Columns of the underlying table (with observed length ranges):

| column | type | length range |
|---|---|---|
| sha | string | 40-40 |
| text | string | 1-13.4M |
| id | string | 2-117 |
| tags | sequence | 1-7.91k |
| created_at | string | 25-25 |
| metadata | string | 2-875k |
| last_modified | string | 25-25 |
| arxiv | sequence | 0-25 |
| languages | sequence | 0-7.91k |
| tags_str | string | 17-159k |
| text_str | string | 1-447k |
| text_lists | sequence | 0-352 |
| processed_texts | sequence | 1-353 |
5f55375edbfe0270c20bcf770751ad982c0e6614 |
# Dataset Card for MultiWOZ 2.1
- **Repository:** https://github.com/budzianowski/multiwoz
- **Paper:** https://aclanthology.org/2020.lrec-1.53
- **Leaderboard:** https://github.com/budzianowski/multiwoz
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)
To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:
```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('multiwoz21')
ontology = load_ontology('multiwoz21')
database = load_database('multiwoz21')
```
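As a quick sanity check after loading, the returned objects can be inspected. This is a minimal sketch that assumes the unified data format exposes the splits as dictionary keys, which may differ slightly depending on your ConvLab-3 version:
```
from convlab.util import load_dataset

dataset = load_dataset('multiwoz21')
# Assumption: the unified format returns a dict keyed by split name
# ('train', 'validation', 'test'), each holding a list of dialogues.
for split, dialogues in dataset.items():
    print(split, len(dialogues))
```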
For more usage examples, please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).
### Dataset Summary
MultiWOZ 2.1 fixed the noise in state annotations and dialogue utterances. It also includes user dialogue acts from ConvLab (Lee et al., 2019) as well as multiple slot descriptions per dialogue state slot.
- **How to get the transformed data from original data:**
- Download [MultiWOZ_2.1.zip](https://github.com/budzianowski/multiwoz/blob/master/data/MultiWOZ_2.1.zip).
- Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
- Create a new ontology in the unified format, taking slot descriptions from MultiWOZ 2.2.
- Correct some grammar errors in the text, mainly following `tokenization.md` in MultiWOZ_2.1.
  - Normalize slot names and values. See the `normalize_domain_slot_value` function in `preprocess.py` (an illustrative sketch follows this list).
- Correct some non-categorical slots' values and provide character level span annotation.
- Concatenate multiple values in user goal & state using `|`.
- Add `booked` information in system turns from original belief states.
- Remove `Booking` domain and remap all booking relevant dialog acts to unify the annotation of booking action in different domains, see `booking_remapper.py`.
- **Annotations:**
- user goal, dialogue acts, state.
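The following toy example illustrates the slot normalization and `|`-concatenation described above. It is purely illustrative; the actual `normalize_domain_slot_value` in `preprocess.py` is more involved.
```
# Purely illustrative; not the actual implementation in preprocess.py.
def normalize_domain_slot_value(domain, slot, value):
    # hypothetical normalization: lowercase, strip whitespace, compact slot names
    domain = domain.strip().lower()
    slot = slot.strip().lower().replace(" ", "")
    value = value.strip().lower()
    return domain, slot, value

def concat_values(values):
    # multiple values in the user goal/state are joined with '|'
    return "|".join(v for v in values if v)

print(normalize_domain_slot_value(" Hotel ", "price range", "Cheap"))  # ('hotel', 'pricerange', 'cheap')
print(concat_values(["cambridge", "ely"]))  # 'cambridge|ely'
```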
### Supported Tasks and Leaderboards
NLU, DST, Policy, NLG, E2E, User simulator
### Languages
English
### Data Splits
| split | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-------------|--------------|-----------|--------------|---------------|-------------------------|------------------------|--------------------------------|-----------------------------------|
| train | 8438 | 113556 | 13.46 | 13.23 | 2.8 | 98.84 | 99.48 | 86.39 | 98.22 |
| validation | 1000 | 14748 | 14.75 | 13.5 | 2.98 | 98.84 | 99.46 | 86.59 | 98.17 |
| test | 1000 | 14744 | 14.74 | 13.5 | 2.93 | 99.21 | 99.32 | 85.83 | 98.58 |
| all | 10438 | 143048 | 13.7 | 13.28 | 2.83 | 98.88 | 99.47 | 86.35 | 98.25 |
8 domains: ['attraction', 'hotel', 'taxi', 'restaurant', 'train', 'police', 'hospital', 'general']
- **cat slot match**: the percentage of categorical slot values that appear among the ontology's possible values (a computation sketch follows this list).
- **non-cat slot span**: the percentage of non-categorical slot values that have span annotations.
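A hedged sketch of how the **cat slot match** percentage could be computed; the variable names and data layout are assumptions, not the exact evaluation code behind the table above.
```
# Hedged sketch of the "cat slot match" statistic; names and data layout
# are assumptions, not the exact evaluation code.
def cat_slot_match(state_values, ontology):
    # state_values: list of (domain, slot, value) triples taken from dialogue states
    # ontology: dict mapping (domain, slot) -> set of possible categorical values
    matched, total = 0, 0
    for domain, slot, value in state_values:
        possible = ontology.get((domain, slot))
        if possible is None:  # non-categorical slot: not counted by this metric
            continue
        total += 1
        # multiple values are concatenated with '|' (see the notes above)
        if any(v in possible for v in value.split("|")):
            matched += 1
    return 100.0 * matched / total if total else 0.0
```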
### Citation
```
@inproceedings{eric-etal-2020-multiwoz,
title = "{M}ulti{WOZ} 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines",
author = "Eric, Mihail and Goel, Rahul and Paul, Shachi and Sethi, Abhishek and Agarwal, Sanchit and Gao, Shuyag and Hakkani-Tur, Dilek",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.53",
pages = "422--428",
ISBN = "979-10-95546-34-4",
}
```
### Licensing Information
Apache License, Version 2.0 | ConvLab/multiwoz21 | [
"task_categories:conversational",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-04-01T02:32:58+00:00 | {"language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "MultiWOZ 2.1"} | 2022-11-25T08:00:28+00:00 | [] | [
"en"
] | TAGS
#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
| Dataset Card for MultiWOZ 2.1
=============================
* Repository: URL
* Paper: URL
* Leaderboard: URL
* Who transforms the dataset: Qi Zhu(zhuq96 at gmail dot com)
To use this dataset, you need to install ConvLab-3 platform first. Then you can load the dataset via:
For more usage please refer to here.
### Dataset Summary
MultiWOZ 2.1 fixed the noise in state annotations and dialogue utterances. It also includes user dialogue acts from ConvLab (Lee et al., 2019) as well as multiple slot descriptions per dialogue state slot.
* How to get the transformed data from original data:
+ Download MultiWOZ\_2.1.zip.
+ Run 'python URL' in the current directory.
* Main changes of the transformation:
+ Create a new ontology in the unified format, taking slot descriptions from MultiWOZ 2.2.
+ Correct some grammar errors in the text, mainly following 'URL' in MultiWOZ\_2.1.
+ Normalize slot name and value. See 'normalize\_domain\_slot\_value' function in 'URL'.
+ Correct some non-categorical slots' values and provide character level span annotation.
+ Concatenate multiple values in user goal & state using '|'.
+ Add 'booked' information in system turns from original belief states.
+ Remove 'Booking' domain and remap all booking relevant dialog acts to unify the annotation of booking action in different domains, see 'booking\_remapper.py'.
* Annotations:
+ user goal, dialogue acts, state.
### Supported Tasks and Leaderboards
NLU, DST, Policy, NLG, E2E, User simulator
### Languages
English
### Data Splits
8 domains: ['attraction', 'hotel', 'taxi', 'restaurant', 'train', 'police', 'hospital', 'general']
* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.
* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.
### Licensing Information
Apache License, Version 2.0
| [
"### Dataset Summary\n\n\nMultiWOZ 2.1 fixed the noise in state annotations and dialogue utterances. It also includes user dialogue acts from ConvLab (Lee et al., 2019) as well as multiple slot descriptions per dialogue state slot.\n\n\n* How to get the transformed data from original data:\n\t+ Download MultiWOZ\\_2.1.zip.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Create a new ontology in the unified format, taking slot descriptions from MultiWOZ 2.2.\n\t+ Correct some grammar errors in the text, mainly following 'URL' in MultiWOZ\\_2.1.\n\t+ Normalize slot name and value. See 'normalize\\_domain\\_slot\\_value' function in 'URL'.\n\t+ Correct some non-categorical slots' values and provide character level span annotation.\n\t+ Concatenate multiple values in user goal & state using '|'.\n\t+ Add 'booked' information in system turns from original belief states.\n\t+ Remove 'Booking' domain and remap all booking relevant dialog acts to unify the annotation of booking action in different domains, see 'booking\\_remapper.py'.\n* Annotations:\n\t+ user goal, dialogue acts, state.",
"### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E, User simulator",
"### Languages\n\n\nEnglish",
"### Data Splits\n\n\n\n8 domains: ['attraction', 'hotel', 'taxi', 'restaurant', 'train', 'police', 'hospital', 'general']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.",
"### Licensing Information\n\n\nApache License, Version 2.0"
] | [
"TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n",
"### Dataset Summary\n\n\nMultiWOZ 2.1 fixed the noise in state annotations and dialogue utterances. It also includes user dialogue acts from ConvLab (Lee et al., 2019) as well as multiple slot descriptions per dialogue state slot.\n\n\n* How to get the transformed data from original data:\n\t+ Download MultiWOZ\\_2.1.zip.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Create a new ontology in the unified format, taking slot descriptions from MultiWOZ 2.2.\n\t+ Correct some grammar errors in the text, mainly following 'URL' in MultiWOZ\\_2.1.\n\t+ Normalize slot name and value. See 'normalize\\_domain\\_slot\\_value' function in 'URL'.\n\t+ Correct some non-categorical slots' values and provide character level span annotation.\n\t+ Concatenate multiple values in user goal & state using '|'.\n\t+ Add 'booked' information in system turns from original belief states.\n\t+ Remove 'Booking' domain and remap all booking relevant dialog acts to unify the annotation of booking action in different domains, see 'booking\\_remapper.py'.\n* Annotations:\n\t+ user goal, dialogue acts, state.",
"### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E, User simulator",
"### Languages\n\n\nEnglish",
"### Data Splits\n\n\n\n8 domains: ['attraction', 'hotel', 'taxi', 'restaurant', 'train', 'police', 'hospital', 'general']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.",
"### Licensing Information\n\n\nApache License, Version 2.0"
] |
0fbd5a29ff7f0c14ba9b11b878b05e8bdcc0a4c0 | MITRE technique subtechnique | Ericblancosf/subtechnique | [
"region:us"
] | 2022-04-01T03:56:18+00:00 | {} | 2022-04-01T04:02:50+00:00 | [] | [] | TAGS
#region-us
| MITRE technique subtechnique | [] | [
"TAGS\n#region-us \n"
] |
79ac5a87c5025ad4eb1713832edaf19838d8c10f | annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
languages: []
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: bioasq
size_categories:
- unknown
source_datasets:
- extended|pubmed_qa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa | tan9/bioasq | [
"region:us"
] | 2022-04-01T07:25:44+00:00 | {} | 2022-04-01T08:40:24+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
languages: []
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: bioasq
size_categories:
- unknown
source_datasets:
- extended|pubmed_qa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa | [] | [
"TAGS\n#region-us \n"
] |
3955cef83f919910b99d77369b44c09a67907ef2 | annotations_creators:
- other
language_creators:
- other
languages:
- en-US
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: pubmedqa
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa | tan9/pubmedQA | [
"region:us"
] | 2022-04-01T08:41:08+00:00 | {} | 2022-04-01T08:44:08+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- other
language_creators:
- other
languages:
- en-US
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: pubmedqa
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa | [] | [
"TAGS\n#region-us \n"
] |
b1aac0607073656a92770b7dc5766a02abef01d6 | # EAS Dataset
[](https://opendatacommons.org/licenses/odbl/)
Emotions Analytic System (EAS) on Instagram social network data
Nowadays, thanks to the spread of social media and the large amount of data on the Internet, the way we look at and interpret data is evolving. Visualization is one of the most important fields in data science, and given the growing usage of social media, analyzing the data it contains is crucial. In this research, an Emotion Analytic System (EAS) on Instagram social network data was designed and developed. The system analyzes the emotions and words that users write and presents them with visualization techniques. Over 370,000 Instagram comments were collected with data crawlers that we developed; the data were then prepared and preprocessed, including normalization and keyword extraction. The system is implemented in Python.
This dataset contains over 370,000 preprocessed comments (most of them in Persian) from 40 Instagram channels. The comments were crawled between 12 April 2017 (1396/01/26 A.H.S) and 29 July 2017 (1396/05/07 A.H.S).
# Citation
If you use this dataset in your publications, please cite this paper:
```
@article {
author = {Kiaei, Seyed Faridoddin and Dehghan Rouzi, Mohammad and Farzi, Saeed},
title = {Designing and Implementing an Emotion Analytic System (EAS) on Instagram Social Network Data},
journal = {International Journal of Web Research},
volume = {2},
number = {2},
pages = {9-14},
year = {2019},
publisher = {University of Science and Culture},
issn = {2645-4335},
eissn = {2645-4343},
doi = {10.22133/ijwr.2020.225574.1052},
keywords = {Emotion Analysis,visualization,Instagram,Election},
url = {http://ijwr.usc.ac.ir/article_110287.html},
eprint = {http://ijwr.usc.ac.ir/article_110287_ad2b34be8792fd3e55ae13ea0f367b7a.pdf}
}
```
| sfdkiaei/EAS | [
"region:us"
] | 2022-04-01T09:46:59+00:00 | {} | 2022-04-01T10:14:44+00:00 | [] | [] | TAGS
#region-us
| # EAS Dataset
 on Instagram social network data
Nowadays, thanks to spread of social media, and large amount of data in Internet, the need for changing how we look and interpret data is evolving. Visualization is one of the most important fields in data science. About growing usage of social media, analyzing the data they contain is crucial. In this research, the Emotion Analytic System on Instagram social network data designed and developed. In this system, we analyze emotions and words that user writes, and visualize them by visualizing techniques. Over 370,000 Instagram comments have been collected with the help of data crawlers that we developed, after that we prepared the data and preprocessed them; including normalizing, finding the keywords and etc. The system is developed by Python.
This Dataset has over 370,000 preprocessed comments (that most of them are in Persian) from 40 instagram channels. These comments are crawled from 12 April 2017 (1396/01/26 A.H.S) to 29 July 2017 (1396/05/07 A.H.S).
If you use this dataset in your publications, please cite this paper:
| [
"# EAS Dataset\n on Instagram social network data\n\nNowadays, thanks to spread of social media, and large amount of data in Internet, the need for changing how we look and interpret data is evolving. Visualization is one of the most important fields in data science. About growing usage of social media, analyzing the data they contain is crucial. In this research, the Emotion Analytic System on Instagram social network data designed and developed. In this system, we analyze emotions and words that user writes, and visualize them by visualizing techniques. Over 370,000 Instagram comments have been collected with the help of data crawlers that we developed, after that we prepared the data and preprocessed them; including normalizing, finding the keywords and etc. The system is developed by Python.\n\nThis Dataset has over 370,000 preprocessed comments (that most of them are in Persian) from 40 instagram channels. These comments are crawled from 12 April 2017 (1396/01/26 A.H.S) to 29 July 2017 (1396/05/07 A.H.S).\n\nIf you use this dataset in your publications, please cite this paper:"
] | [
"TAGS\n#region-us \n",
"# EAS Dataset\n on Instagram social network data\n\nNowadays, thanks to spread of social media, and large amount of data in Internet, the need for changing how we look and interpret data is evolving. Visualization is one of the most important fields in data science. About growing usage of social media, analyzing the data they contain is crucial. In this research, the Emotion Analytic System on Instagram social network data designed and developed. In this system, we analyze emotions and words that user writes, and visualize them by visualizing techniques. Over 370,000 Instagram comments have been collected with the help of data crawlers that we developed, after that we prepared the data and preprocessed them; including normalizing, finding the keywords and etc. The system is developed by Python.\n\nThis Dataset has over 370,000 preprocessed comments (that most of them are in Persian) from 40 instagram channels. These comments are crawled from 12 April 2017 (1396/01/26 A.H.S) to 29 July 2017 (1396/05/07 A.H.S).\n\nIf you use this dataset in your publications, please cite this paper:"
] |
649a061a8b9fc03aad2d3abd56c2e9ce42da42fd | Source: https://www.kaggle.com/datasets/djilax/pkmn-image-dataset | huggan/pokemon | [
"region:us"
] | 2022-04-01T10:44:34+00:00 | {} | 2022-04-01T10:50:45+00:00 | [] | [] | TAGS
#region-us
| Source: URL | [] | [
"TAGS\n#region-us \n"
] |
0689c984ee2d9fb5ffd7c91f0cfeb7bbaa43f2f9 |
# Dataset Card for [es_tweets_laboral]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Dataset created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Labeled by @DanielaGarciaQuezada
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | hackathon-pln-es/es_tweets_laboral | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | 2022-04-01T12:20:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "pretty_name": "Tweets en espa\u00f1ol denuncia laboral"} | 2022-10-25T09:03:39+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #task_ids-intent-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Spanish #license-unknown #region-us
|
# Dataset Card for [es_tweets_laboral]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
Dataset creado por @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Etiquetado por @DanielaGarciaQuezada
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
español
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [es_tweets_laboral]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\nDataset creado por @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21\nEtiquetado por @DanielaGarciaQuezada\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages\n\nespañol",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-intent-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Spanish #license-unknown #region-us \n",
"# Dataset Card for [es_tweets_laboral]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\nDataset creado por @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21\nEtiquetado por @DanielaGarciaQuezada\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages\n\nespañol",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
40251166354d6348fdd75a258486e55260642a5d |
# Dataset Card for MetaShift
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
- **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
- **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
- **Point of Contact:** [Weixin Liang](mailto:[email protected])
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using their metadata, which provides context for each image.
For example: cats with cars or cats in a bathroom.
The main advantage is that the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift :
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
- attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer to [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) for more information.
- attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology in the above link.
- with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
- image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters :
1. To generate the MetaShift Dataset :
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
```
The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes) :
```python
load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
```
The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
3. To generate the dataset with additional image metadata information :
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
```
4. Further, you can specify your own configuration different from those used in the papers as follows:
```python
load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
```
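For instance, a sketch that combines several of the options above; it assumes the Hugging Face `datasets` loader used in the examples, and the parameter values are purely illustrative:
```python
from datasets import load_dataset

dataset = load_dataset(
    "metashift",
    selected_classes=["cat", "dog"],   # illustrative class list
    with_image_metadata=True,
    image_subset_size_threshold=20,
    min_local_groups=3,
)
print(dataset)
```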
### Dataset Meta-Graphs
From the MetaShift Github Repo :
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
The following are the meta-graphs for the default classes; they were generated using the `generate_full_MetaShift.py` file.
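As a reminder of the edge weight mentioned in the quote above, the overlap coefficient between two subsets A and B is |A ∩ B| / min(|A|, |B|). A minimal sketch follows; the repository's exact implementation may differ in details.
```python
def overlap_coefficient(subset_a, subset_b):
    # subsets are collections of image IDs belonging to two visual contexts
    a, b = set(subset_a), set(subset_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# e.g. two "cat" contexts sharing 12 images, with 40 and 55 images each -> 12 / 40 = 0.3
print(overlap_coefficient(range(40), range(28, 83)))
```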
<p align='center'>
<img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both :
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
```
{
'image_id': '2411520',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
'label': 2,
'context': 'fence'
}
```
A sample from the MetaShift-Attributes dataset is provided below:
```
{
'image_id': '2401643',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
'label': 0
}
```
The format of the dataset with image metadata included by passing `with_image_metadata=True` to `load_dataset` is provided below:
```
{
'image_id': '2365745',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
'label': 0,
'context': 'ground',
'width': 500,
'height': 333,
'location': None,
'weather': None,
'objects':
{
'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
}
}
```
### Data Fields
- `image_id`: Unique numeric ID of the image in Base Visual Genome dataset.
- `image`: A PIL.Image.Image object containing the image.
- `label`: an int classification label.
- `context`: represents the context in which the label is seen. A given label could have multiple contexts.
Image Metadata format can be seen [here](https://cs.stanford.edu/people/dorarad/gqa/download.html) and a sample above has been provided for reference.
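As an example of working with these fields, the following hedged sketch tallies the visual contexts observed for one class; it assumes the `datasets` loader from the usage examples above, and the loading parameters are illustrative.
```python
from collections import Counter
from datasets import load_dataset

train = load_dataset("metashift", selected_classes=["cat", "dog"])["train"]
cat_id = train.features["label"].str2int("cat")
# Count how many "cat" images fall into each visual context.
contexts = Counter(ex["context"] for ex in train if ex["label"] == cat_id)
print(contexts.most_common(10))
```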
### Data Splits
All the data is contained in the training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subsets, constructing meta-graphs and then quantifying distances of distribution shifts.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The MetaShift dataset uses Visual Genome as its base; therefore, the annotation process is the same as for the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper :
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper :
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
From the paper :
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108,077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Citation Information
```bibtex
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. | metashift | [
"task_categories:image-classification",
"task_categories:other",
"task_ids:multi-label-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"domain-generalization",
"arxiv:2202.06523",
"region:us"
] | 2022-04-01T14:16:57+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification", "other"], "task_ids": ["multi-label-image-classification"], "paperswithcode_id": "metashift", "pretty_name": "MetaShift", "tags": ["domain-generalization"], "dataset_info": {"features": [{"name": "image_id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog", "2": "bus", "3": "truck", "4": "elephant", "5": "horse", "6": "bowl", "7": "cup"}}}}, {"name": "context", "dtype": "string"}], "config_name": "metashift", "splits": [{"name": "train", "num_bytes": 16333509, "num_examples": 86808}], "download_size": 21878013674, "dataset_size": 16333509}} | 2024-01-18T11:19:04+00:00 | [
"2202.06523"
] | [
"en"
] | TAGS
#task_categories-image-classification #task_categories-other #task_ids-multi-label-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #domain-generalization #arxiv-2202.06523 #region-us
|
# Dataset Card for MetaShift
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: MetaShift homepage
- Repository: MetaShift repository
- Paper: MetaShift paper
- Point of Contact: Weixin Liang
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using its metadata which provides context for each image.
For example : cats with cars or cats in bathroom.
The main advantage is the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift :
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: 'list[string]', optional, list of the classes to generate the MetaShift dataset for. If 'None', the list is equal to '['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']'.
- attributes_dataset: 'bool', default 'False', if 'True', the script generates the MetaShift-Attributes dataset. Refer MetaShift-Attributes Dataset for more information.
- attributes: 'list[string]', optional, list of attributes classes included in the Attributes dataset. If 'None' and 'attributes_dataset' is 'True', it's equal to '["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]'. You can find the full attribute ontology in the above link.
- with_image_metadata: 'bool', default 'False', whether to include image metadata. If set to 'True', this will give additional metadata about each image. See Scene Graph for more information.
- image_subset_size_threshold: 'int', default '25', the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: 'int', default '5', the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters :
1. To generate the MetaShift Dataset :
The full object vocabulary and its hierarchy can be seen here.
The default classes are '['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']'
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes) :
The default attributes are '["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]'
3. To generate the dataset with additional image metadata information :
4. Further, you can specify your own configuration different from those used in the papers as follows:
### Dataset Meta-Graphs
From the MetaShift Github Repo :
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
The following are the metagraphs for the default classes, these have been generated using the 'generate_full_MetaShift.py' file.
<p align='center'>
<img width='75%' src='https://i.URL alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.URL alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.URL alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.URL alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.URL alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.URL alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both :
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
A sample from the MetaShift-Attributes dataset is provided below:
The format of the dataset with image metadata included by passing 'with_image_metadata=True' to 'load_dataset' is provided below:
### Data Fields
- 'image_id': Unique numeric ID of the image in Base Visual Genome dataset.
- 'image': A PIL.Image.Image object containing the image.
- 'label': an int classification label.
- 'context': represents the context in which the label is seen. A given label could have multiple contexts.
Image Metadata format can be seen here and a sample above has been provided for reference.
### Data Splits
All the data is contained in training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subests, constructing meta-graphs and then duantify distances of distribution shifts.
#### Who are the source language producers?
### Annotations
The MetaShift dataset uses Visual Genome as its base, therefore the annotations process is same as the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper :
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper :
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
From the paper :
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Contributions
Thanks to @dnaveenr for adding this dataset. | [
"# Dataset Card for MetaShift",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: MetaShift homepage\n- Repository: MetaShift repository\n- Paper: MetaShift paper\n- Point of Contact: Weixin Liang",
"### Dataset Summary\n\nThe MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.\n\nThe authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.\nThe key idea is to cluster images using its metadata which provides context for each image.\nFor example : cats with cars or cats in bathroom.\nThe main advantage is the dataset contains many more coherent sets of data compared to other benchmarks.\n\nTwo important benefits of MetaShift :\n- Contains orders of magnitude more natural data shifts than previously available.\n- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.",
"### Dataset Usage\n\nThe dataset has the following configuration parameters:\n- selected_classes: 'list[string]', optional, list of the classes to generate the MetaShift dataset for. If 'None', the list is equal to '['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']'.\n- attributes_dataset: 'bool', default 'False', if 'True', the script generates the MetaShift-Attributes dataset. Refer MetaShift-Attributes Dataset for more information.\n- attributes: 'list[string]', optional, list of attributes classes included in the Attributes dataset. If 'None' and 'attributes_dataset' is 'True', it's equal to '[\"cat(orange)\", \"cat(white)\", \"dog(sitting)\", \"dog(jumping)\"]'. You can find the full attribute ontology in the above link. \n- with_image_metadata: 'bool', default 'False', whether to include image metadata. If set to 'True', this will give additional metadata about each image. See Scene Graph for more information.\n- image_subset_size_threshold: 'int', default '25', the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.\n- min_local_groups: 'int', default '5', the minimum number of local groups required to be considered an object class.\n\nConsider the following examples to get an idea of how you can use the configuration parameters :\n\n1. To generate the MetaShift Dataset :\n\nThe full object vocabulary and its hierarchy can be seen here.\n\nThe default classes are '['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']'\n\n2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes) :\n\n\nThe default attributes are '[\"cat(orange)\", \"cat(white)\", \"dog(sitting)\", \"dog(jumping)\"]'\n\n3. To generate the dataset with additional image metadata information :\n\n4. Further, you can specify your own configuration different from those used in the papers as follows:",
"### Dataset Meta-Graphs\n\nFrom the MetaShift Github Repo :\n> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.\n\nThe following are the metagraphs for the default classes, these have been generated using the 'generate_full_MetaShift.py' file.\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Cat Meta-graph\" /> </br>\n <b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Dog Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Bus Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the “Bus” class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Elephant Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the \"Elephant\" class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Horse Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the \"Horse\" class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Truck Meta-graph\"/> </br>\n<b>Figure: Meta-graph for the Truck class. </b> \n</p>",
"### Supported Tasks and Leaderboards\n\nFrom the paper:\n> MetaShift supports evaluation on both :\n> - domain generalization and subpopulation shifts settings,\n> - assessing training conflicts.",
"### Languages\n\n All the classes and subsets use English as their primary language.",
"## Dataset Structure",
"### Data Instances\n\nA sample from the MetaShift dataset is provided below:\n\n\n\nA sample from the MetaShift-Attributes dataset is provided below:\n\n\nThe format of the dataset with image metadata included by passing 'with_image_metadata=True' to 'load_dataset' is provided below:",
"### Data Fields\n\n- 'image_id': Unique numeric ID of the image in Base Visual Genome dataset.\n- 'image': A PIL.Image.Image object containing the image. \n- 'label': an int classification label.\n- 'context': represents the context in which the label is seen. A given label could have multiple contexts.\n\nImage Metadata format can be seen here and a sample above has been provided for reference.",
"### Data Splits\n\nAll the data is contained in training set.",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the paper:\n> We present MetaShift as an important resource for studying the behavior of\nML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate\nits performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFrom the paper:\n> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subests, constructing meta-graphs and then duantify distances of distribution shifts.",
"#### Who are the source language producers?",
"### Annotations\n\nThe MetaShift dataset uses Visual Genome as its base, therefore the annotations process is same as the Visual Genome dataset.",
"#### Annotation process\n\nFrom the Visual Genome paper :\n> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.",
"#### Who are the annotators?\n\nFrom the Visual Genome paper :\n> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nFrom the paper:\n> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the\nbase dataset of our MetaShift. Potential concerns include minority groups being under-represented\nin certain classes (e.g., women with snowboard), or annotation bias where people in images are\nby default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,\nquantifying, and mitigating biases in general computer vision datasets can help with addressing this\npotential negative societal impact.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nFrom the paper :\n> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).",
"### Contributions\n\nThanks to @dnaveenr for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_categories-other #task_ids-multi-label-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #domain-generalization #arxiv-2202.06523 #region-us \n",
"# Dataset Card for MetaShift",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: MetaShift homepage\n- Repository: MetaShift repository\n- Paper: MetaShift paper\n- Point of Contact: Weixin Liang",
"### Dataset Summary\n\nThe MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.\n\nThe authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.\nThe key idea is to cluster images using its metadata which provides context for each image.\nFor example : cats with cars or cats in bathroom.\nThe main advantage is the dataset contains many more coherent sets of data compared to other benchmarks.\n\nTwo important benefits of MetaShift :\n- Contains orders of magnitude more natural data shifts than previously available.\n- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.",
"### Dataset Usage\n\nThe dataset has the following configuration parameters:\n- selected_classes: 'list[string]', optional, list of the classes to generate the MetaShift dataset for. If 'None', the list is equal to '['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']'.\n- attributes_dataset: 'bool', default 'False', if 'True', the script generates the MetaShift-Attributes dataset. Refer MetaShift-Attributes Dataset for more information.\n- attributes: 'list[string]', optional, list of attributes classes included in the Attributes dataset. If 'None' and 'attributes_dataset' is 'True', it's equal to '[\"cat(orange)\", \"cat(white)\", \"dog(sitting)\", \"dog(jumping)\"]'. You can find the full attribute ontology in the above link. \n- with_image_metadata: 'bool', default 'False', whether to include image metadata. If set to 'True', this will give additional metadata about each image. See Scene Graph for more information.\n- image_subset_size_threshold: 'int', default '25', the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.\n- min_local_groups: 'int', default '5', the minimum number of local groups required to be considered an object class.\n\nConsider the following examples to get an idea of how you can use the configuration parameters :\n\n1. To generate the MetaShift Dataset :\n\nThe full object vocabulary and its hierarchy can be seen here.\n\nThe default classes are '['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']'\n\n2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes) :\n\n\nThe default attributes are '[\"cat(orange)\", \"cat(white)\", \"dog(sitting)\", \"dog(jumping)\"]'\n\n3. To generate the dataset with additional image metadata information :\n\n4. Further, you can specify your own configuration different from those used in the papers as follows:",
"### Dataset Meta-Graphs\n\nFrom the MetaShift Github Repo :\n> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.\n\nThe following are the metagraphs for the default classes, these have been generated using the 'generate_full_MetaShift.py' file.\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Cat Meta-graph\" /> </br>\n <b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Dog Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Bus Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the “Bus” class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Elephant Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the \"Elephant\" class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Horse Meta-graph\" /> </br>\n<b>Figure: Meta-graph for the \"Horse\" class. </b> \n</p>\n\n<p align='center'>\n <img width='75%' src='https://i.URL alt=\"Truck Meta-graph\"/> </br>\n<b>Figure: Meta-graph for the Truck class. </b> \n</p>",
"### Supported Tasks and Leaderboards\n\nFrom the paper:\n> MetaShift supports evaluation on both :\n> - domain generalization and subpopulation shifts settings,\n> - assessing training conflicts.",
"### Languages\n\n All the classes and subsets use English as their primary language.",
"## Dataset Structure",
"### Data Instances\n\nA sample from the MetaShift dataset is provided below:\n\n\n\nA sample from the MetaShift-Attributes dataset is provided below:\n\n\nThe format of the dataset with image metadata included by passing 'with_image_metadata=True' to 'load_dataset' is provided below:",
"### Data Fields\n\n- 'image_id': Unique numeric ID of the image in Base Visual Genome dataset.\n- 'image': A PIL.Image.Image object containing the image. \n- 'label': an int classification label.\n- 'context': represents the context in which the label is seen. A given label could have multiple contexts.\n\nImage Metadata format can be seen here and a sample above has been provided for reference.",
"### Data Splits\n\nAll the data is contained in training set.",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the paper:\n> We present MetaShift as an important resource for studying the behavior of\nML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate\nits performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFrom the paper:\n> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subests, constructing meta-graphs and then duantify distances of distribution shifts.",
"#### Who are the source language producers?",
"### Annotations\n\nThe MetaShift dataset uses Visual Genome as its base, therefore the annotations process is same as the Visual Genome dataset.",
"#### Annotation process\n\nFrom the Visual Genome paper :\n> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.",
"#### Who are the annotators?\n\nFrom the Visual Genome paper :\n> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nFrom the paper:\n> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the\nbase dataset of our MetaShift. Potential concerns include minority groups being under-represented\nin certain classes (e.g., women with snowboard), or annotation bias where people in images are\nby default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,\nquantifying, and mitigating biases in general computer vision datasets can help with addressing this\npotential negative societal impact.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nFrom the paper :\n> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).",
"### Contributions\n\nThanks to @dnaveenr for adding this dataset."
] |
d5d0b1e1251a49c870b8929dc10c3f03ea7dbb67 | Deprecated as of meerqat v4-alpha. See https://github.com/PaulLerner/ViQuAE | PaulLerner/viquae_passages | [
"region:us"
] | 2022-04-01T14:45:33+00:00 | {} | 2023-05-31T10:41:41+00:00 | [] | [] | TAGS
#region-us
| Deprecated as of meerqat v4-alpha. See URL | [] | [
"TAGS\n#region-us \n"
] |
2eec8352d97326bcba1de4687668e2602b22c110 |
## How to use the data sets
This dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes from the PDB.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
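For reference, a minimal tokenizer sketch is shown below. The regular expression is the one commonly attributed to Schwaller et al. (Molecular Transformer); it is reproduced here from memory, so treat it as an assumption and check it against the original source before relying on it.

```python
import re

# Commonly cited SMILES tokenization regex (Schwaller et al.); reproduced from
# memory, so verify against the original publication before production use.
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles: str):
    """Split a SMILES string into tokens."""
    return SMILES_REGEX.findall(smiles)

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```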
## Ligand selection criteria
Only ligands
- that have at least 3 atoms,
- a molecular weight >= 100 Da,
- and which are not among the 280 most common ligands in the PDB (this includes common additives like PEG, ADP, ..)
are considered.
### Use the already preprocessed data
Load a test/train split using
```
import pandas as pd
train = pd.read_pickle('data/pdb_train.p')
test = pd.read_pickle('data/pdb_test.p')
```
Receptor features contain protein frames and side chain angles in OpenFold/AlphaFold format.
Ligand tokens which do not correspond to atoms have `nan` as their coordinates.
Documentation by example:
```
>>> import pandas as pd
>>> test = pd.read_pickle('data/pdb_test.p')
>>> test.head(5)
pdb_id lig_id ... ligand_xyz_2d ligand_bonds
0 7k38 VTY ... [(-2.031355975502858, -1.6316778784387098, 0.0... [(0, 1), (1, 4), (4, 5), (5, 10), (10, 9), (9,...
1 6prt OWA ... [(4.883261310160714, -0.37850716807626705, 0.0... [(11, 18), (18, 20), (20, 8), (8, 7), (7, 2), ...
2 4lxx FNF ... [(8.529427756002057, 2.2434809270065372, 0.0),... [(51, 49), (49, 48), (48, 46), (46, 53), (53, ...
3 4lxx FON ... [(-10.939694946697701, -1.1876214529096956, 0.... [(13, 1), (1, 0), (0, 3), (3, 4), (4, 7), (7, ...
4 7bp1 CAQ ... [(-1.9485571585149868, -1.499999999999999, 0.0... [(4, 3), (3, 1), (1, 0), (0, 7), (7, 9), (7, 6...
[5 rows x 8 columns]
>>> test.columns
Index(['pdb_id', 'lig_id', 'seq', 'smiles', 'receptor_features', 'ligand_xyz',
'ligand_xyz_2d', 'ligand_bonds'],
dtype='object')
>>> test.iloc[0]['receptor_features']
{'rigidgroups_gt_frames': array([[[[-5.3122622e-01, 2.0922849e-01, -8.2098854e-01,
1.7295000e+01],
[-7.1005428e-01, -6.3858479e-01, 2.9670244e-01,
-9.1399997e-01],
[-4.6219218e-01, 7.4056256e-01, 4.8779655e-01,
3.3284000e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.0000000e+00]],
...
[[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
-3.5030000e+00],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
2.6764999e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.5136000e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.0000000e+00]]]], dtype=float32), 'torsion_angles_sin_cos': array([[[-1.90855725e-09, 3.58859784e-02],
[ 1.55730803e-01, 9.87799530e-01],
[ 6.05505241e-01, -7.95841312e-01],
...,
[-2.92459433e-01, -9.56277928e-01],
[ 9.96634814e-01, -8.19697779e-02],
[ 0.00000000e+00, 0.00000000e+00]],
...
[[ 2.96455977e-04, -9.99999953e-01],
[-8.15660990e-01, 5.78530158e-01],
[-3.17915569e-01, 9.48119024e-01],
...,
[ 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00]]])}
>>> test.iloc[0]['receptor_features'].keys()
dict_keys(['rigidgroups_gt_frames', 'torsion_angles_sin_cos'])
>>> test.iloc[0]['ligand_xyz']
[(22.289, 11.985, 9.225), (21.426, 11.623, 7.959), (nan, nan, nan), (nan, nan, nan), (21.797, 11.427, 6.574), (20.556, 11.56, 5.792), (nan, nan, nan), (20.507, 11.113, 4.552), (nan, nan, nan), (19.581, 10.97, 6.639), (20.107, 10.946, 7.954), (nan, nan, nan), (nan, nan, nan), (19.645, 10.364, 8.804)]
```
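Building on the example above, the snippet below is a small sketch of how one might separate real atoms from placeholder tokens in `ligand_xyz` (tokens that do not correspond to atoms carry `nan` coordinates, as noted earlier). The column names come from the example output above.

```python
import numpy as np
import pandas as pd

test = pd.read_pickle('data/pdb_test.p')
row = test.iloc[0]

# Ligand tokens that are not atoms have (nan, nan, nan) coordinates; mask them out.
xyz = np.array(row['ligand_xyz'], dtype=float)
atom_mask = ~np.isnan(xyz).any(axis=1)

print(f"{atom_mask.sum()} of {len(xyz)} ligand tokens correspond to atoms")
print(xyz[atom_mask])  # coordinates of the real atoms only
```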
### Manual update from PDB
```
# download the PDB archive into folder pdb/
sh rsync.sh 24 # number of parallel download processes
# extract sequences and coordinates in parallel
sbatch pdb.slurm
# or
mpirun -n 42 parse_complexes.py # desired number of tasks
```
| jglaser/pdb_protein_ligand_complexes | [
"proteins",
"molecules",
"chemistry",
"SMILES",
"complex structures",
"region:us"
] | 2022-04-01T18:14:21+00:00 | {"tags": ["proteins", "molecules", "chemistry", "SMILES", "complex structures"]} | 2022-10-13T14:09:57+00:00 | [] | [] | TAGS
#proteins #molecules #chemistry #SMILES #complex structures #region-us
|
## How to use the data sets
This dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes from the PDB.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
## Ligand selection criteria
Only ligands
- that have at least 3 atoms,
- a molecular weight >= 100 Da,
- and which are not among the 280 most common ligands in the PDB (this includes common additives like PEG, ADP, ..)
are considered.
### Use the already preprocessed data
Load a test/train split using
Receptor features contain protein frames and side chain angles in OpenFold/AlphaFold format.
Ligand tokens which do not correspond to atoms have 'nan' as their coordinates.
Documentation by example:
### Manual update from PDB
| [
"## How to use the data sets\n\nThis dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, and the coordinates\nof their complexes from the PDB.\n\nSMILES are assumed to be tokenized by the regex from P. Schwaller.",
"## Ligand selection criteria\n\nOnly ligands\n\n- that have at least 3 atoms,\n- a molecular weight >= 100 Da,\n- and which are not among the 280 most common ligands in the PDB (this includes common additives like PEG, ADP, ..)\n\nare considered.",
"### Use the already preprocessed data\n\nLoad a test/train split using\n\n\n\nReceptor features contain protein frames and side chain angles in OpenFold/AlphaFold format.\nLigand tokens which do not correspond to atoms have 'nan' as their coordinates.\n\nDocumentation by example:",
"### Manual update from PDB"
] | [
"TAGS\n#proteins #molecules #chemistry #SMILES #complex structures #region-us \n",
"## How to use the data sets\n\nThis dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, and the coordinates\nof their complexes from the PDB.\n\nSMILES are assumed to be tokenized by the regex from P. Schwaller.",
"## Ligand selection criteria\n\nOnly ligands\n\n- that have at least 3 atoms,\n- a molecular weight >= 100 Da,\n- and which are not among the 280 most common ligands in the PDB (this includes common additives like PEG, ADP, ..)\n\nare considered.",
"### Use the already preprocessed data\n\nLoad a test/train split using\n\n\n\nReceptor features contain protein frames and side chain angles in OpenFold/AlphaFold format.\nLigand tokens which do not correspond to atoms have 'nan' as their coordinates.\n\nDocumentation by example:",
"### Manual update from PDB"
] |
e3789d92458aeb34a189a1fff9863e6d248d891a | # Dataset Card for biomed_squad_es_v2
This Dataset was created as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a subset of the [dev squad_es (v2) dataset](https://huggingface.co/datasets/squad_es) (automatic translation of the Stanford Question Answering Dataset v2 into Spanish) containing questions related to the biomedical domain.
License, distribution and usage conditions of the original Squad_es Dataset apply.
### Languages
Spanish
## Dataset Structure
### Data Fields
```
{'answers': {'answer_start': [343, 343, 343],
'text': ['diez veces su propio peso',
'diez veces su propio peso',
'diez veces su propio peso']},
'context': 'Casi todos los ctenóforos son depredadores, tomando presas que van desde larvas microscópicas y rotíferos a los adultos de pequeños crustáceos; Las excepciones son los juveniles de dos especies, que viven como parásitos en las salpas en las que los adultos de su especie se alimentan. En circunstancias favorables, los ctenóforos pueden comer diez veces su propio peso en un día. Sólo 100-150 especies han sido validadas, y posiblemente otras 25 no han sido completamente descritas y nombradas. Los ejemplos de libros de texto son cidipidos con cuerpos en forma de huevo y un par de tentáculos retráctiles bordeados con tentilla ("pequeños tentáculos") que están cubiertos con colúnculos, células pegajosas. El filo tiene una amplia gama de formas corporales, incluyendo los platyctenidos de mar profundo, en los que los adultos de la mayoría de las especies carecen de peines, y los beroides costeros, que carecen de tentáculos. Estas variaciones permiten a las diferentes especies construir grandes poblaciones en la misma área, porque se especializan en diferentes tipos de presas, que capturan por una amplia gama de métodos que utilizan las arañas.',
'id': '5725c337271a42140099d165',
'question': '¿Cuánta comida come un Ctenophora en un día?',
'title': 'Ctenophora'}
```
### Data Splits
Validation: 1137 examples
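A minimal loading sketch with the `datasets` library is shown below; the repository id is taken from this card, and the assumption (based on the split listed above) is that the examples are exposed as a `validation` split.

```python
from datasets import load_dataset

# Sketch only: assumes the subset is published under this repo id and that the
# 1137 biomedical examples are exposed as a "validation" split.
ds = load_dataset("hackathon-pln-es/biomed_squad_es_v2", split="validation")

example = ds[0]
print(example["question"])
print(example["answers"]["text"])
```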
### Citation Information
```
@article{2016arXiv160605250R,
author = {Casimiro Pio , Carrino and Marta R. , Costa-jussa and Jose A. R. , Fonollosa},
title = "{Automatic Spanish Translation of the SQuAD Dataset for Multilingual
Question Answering}",
journal = {arXiv e-prints},
year = 2019,
eid = {arXiv:1912.05200v1},
pages = {arXiv:1912.05200v1},
archivePrefix = {arXiv},
eprint = {1912.05200v2},
}
```
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) | hackathon-pln-es/biomed_squad_es_v2 | [
"arxiv:1912.05200",
"region:us"
] | 2022-04-02T02:05:44+00:00 | {} | 2022-04-03T16:46:58+00:00 | [
"1912.05200"
] | [] | TAGS
#arxiv-1912.05200 #region-us
| # Dataset Card for biomed_squad_es_v2
This Dataset was created as part of the "Extractive QA Biomedicine" project developed during the 2022 Hackathon organized by SOMOS NLP.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This is a subset of the dev squad_es (v2) dataset (automatic translation of the Stanford Question Answering Dataset v2 into Spanish) containing questions related to the biomedical domain.
License, distribution and usage conditions of the original Squad_es Dataset apply.
### Languages
Spanish
## Dataset Structure
### Data Fields
### Data Splits
Validation: 1137 examples
## Team
Santiago Maximo: smaximo | [
"# Dataset Card for biomed_squad_es_v2\n\nThis Dataset was created as part of the \"Extractive QA Biomedicine\" project developed during the 2022 Hackathon organized by SOMOS NLP.",
"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis is a subset of the dev squad_es (v2) dataset (automatic translation of the Stanford Question Answering Dataset v2 into Spanish) containing questions related to the biomedical domain.\n\nLicense, distribution and usage conditions of the original Squad_es Dataset apply.",
"### Languages\n\nSpanish",
"## Dataset Structure",
"### Data Fields",
"### Data Splits\n\nValidation: 1137 examples",
"## Team\nSantiago Maximo: smaximo"
] | [
"TAGS\n#arxiv-1912.05200 #region-us \n",
"# Dataset Card for biomed_squad_es_v2\n\nThis Dataset was created as part of the \"Extractive QA Biomedicine\" project developed during the 2022 Hackathon organized by SOMOS NLP.",
"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis is a subset of the dev squad_es (v2) dataset (automatic translation of the Stanford Question Answering Dataset v2 into Spanish) containing questions related to the biomedical domain.\n\nLicense, distribution and usage conditions of the original Squad_es Dataset apply.",
"### Languages\n\nSpanish",
"## Dataset Structure",
"### Data Fields",
"### Data Splits\n\nValidation: 1137 examples",
"## Team\nSantiago Maximo: smaximo"
] |
db1a30d40f031f31dc49353f7dc40b08fea6719a |
# RuNNE dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
Part of NEREL dataset (https://arxiv.org/abs/2108.13112), a Russian dataset
for named entity recognition and relation extraction, used in RuNNE (2022)
competition (https://github.com/dialogue-evaluation/RuNNE).
Entities may be nested (see https://arxiv.org/abs/2108.13112).
Entity types list:
* AGE
* AWARD
* CITY
* COUNTRY
* CRIME
* DATE
* DISEASE
* DISTRICT
* EVENT
* FACILITY
* FAMILY
* IDEOLOGY
* LANGUAGE
* LAW
* LOCATION
* MONEY
* NATIONALITY
* NUMBER
* ORDINAL
* ORGANIZATION
* PENALTY
* PERCENT
* PERSON
* PRODUCT
* PROFESSION
* RELIGION
* STATE_OR_PROVINCE
* TIME
* WORK_OF_ART
## Dataset Structure
There are two "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']`
you can download list of entity types (
Dataset({
features: ['type'],
num_rows: 29
})
)
Using
`load_dataset('MalakhovIlya/RuNNE', 'data')` or `load_dataset('MalakhovIlya/RuNNE')`
you can download the data itself (DatasetDict)
The dataset consists of 3 splits: "train", "test" and "dev". Each of them contains text documents. The "train" and "test" splits also contain annotated entities; "dev" doesn't.
Each entity is represented by a string of the following format: "\<start> \<stop> \<type>", where \<start> is the position of the entity's first symbol in the text, \<stop> is the position of its last symbol, and \<type> is one of the entity types listed above.
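As an illustration, the sketch below parses such entity strings back into `(type, surface form)` pairs. The field names (`text`, `entities`) and the inclusive end offset are assumptions; inspect a loaded example to confirm them.

```python
from datasets import load_dataset

runne = load_dataset('MalakhovIlya/RuNNE')

def parse_entity(entity_str, text):
    # "<start> <stop> <type>"; start/stop are character positions in `text`.
    # Treating <stop> as inclusive is an assumption; adjust the slice if spans
    # come out off by one.
    start, stop, ent_type = entity_str.split(maxsplit=2)
    return ent_type, text[int(start):int(stop) + 1]

example = runne['train'][0]
# The field names 'text' and 'entities' are assumptions; check example.keys().
for ent in example['entities']:
    print(parse_entity(ent, example['text']))
```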
P.S.
Original NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\_(ツ)_/¯
## Citation Information
@article{Artemova2022runne,
title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},
author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},
journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialog"},
year={2022}
}
| iluvvatar/RuNNE | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"arxiv:2108.13112",
"region:us"
] | 2022-04-02T06:55:42+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "RuNNE"} | 2023-03-30T12:36:53+00:00 | [
"2108.13112"
] | [
"ru"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #arxiv-2108.13112 #region-us
|
# RuNNE dataset
## Table of Contents
- Dataset Description
- Dataset Structure
- Citation Information
- Contacts
## Dataset Description
Part of NEREL dataset (URL a Russian dataset
for named entity recognition and relation extraction, used in RuNNE (2022)
competition (URL
Entities may be nested (see URL
Entity types list:
* AGE
* AWARD
* CITY
* COUNTRY
* CRIME
* DATE
* DISEASE
* DISTRICT
* EVENT
* FACILITY
* FAMILY
* IDEOLOGY
* LANGUAGE
* LAW
* LOCATION
* MONEY
* NATIONALITY
* NUMBER
* ORDINAL
* ORGANIZATION
* PENALTY
* PERCENT
* PERSON
* PRODUCT
* PROFESSION
* RELIGION
* STATE_OR_PROVINCE
* TIME
* WORK_OF_ART
## Dataset Structure
There are two "configs" or "subsets" of the dataset.
Using
'load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']'
you can download list of entity types (
Dataset({
features: ['type'],
num_rows: 29
})
)
Using
'load_dataset('MalakhovIlya/RuNNE', 'data')' or 'load_dataset('MalakhovIlya/RuNNE')'
you can download the data itself (DatasetDict)
The dataset consists of 3 splits: "train", "test" and "dev". Each of them contains text documents. The "train" and "test" splits also contain annotated entities; "dev" doesn't.
Each entity is represented by a string of the following format: "\<start> \<stop> \<type>", where \<start> is the position of the entity's first symbol in the text, \<stop> is the position of its last symbol, and \<type> is one of the entity types listed above.
P.S.
Original NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\_(ツ)_/¯
@article{Artemova2022runne,
title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},
author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},
journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialog"},
year={2022}
}
| [
"# RuNNE dataset",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Citation Information\n- Contacts",
"## Dataset Description\nPart of NEREL dataset (URL a Russian dataset\nfor named entity recognition and relation extraction, used in RuNNE (2022)\ncompetition (URL\n\nEntities may be nested (see URL\n\nEntity types list:\n* AGE\n* AWARD\n* CITY\n* COUNTRY\n* CRIME\n* DATE\n* DISEASE\n* DISTRICT\n* EVENT\n* FACILITY\n* FAMILY\n* IDEOLOGY\n* LANGUAGE\n* LAW\n* LOCATION\n* MONEY\n* NATIONALITY\n* NUMBER\n* ORDINAL\n* ORGANIZATION\n* PENALTY\n* PERCENT\n* PERSON\n* PRODUCT\n* PROFESSION\n* RELIGION\n* STATE_OR_PROVINCE\n* TIME\n* WORK_OF_ART",
"## Dataset Structure\nThere are two \"configs\" or \"subsets\" of the dataset. \nUsing \n'load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']' \nyou can download list of entity types (\nDataset({\n features: ['type'],\n num_rows: 29\n})\n) \nUsing \n'load_dataset('MalakhovIlya/RuNNE', 'data')' or 'load_dataset('MalakhovIlya/RuNNE')'\nyou can download the data itself (DatasetDict)\n\nDataset consists of 3 splits: \"train\", \"test\" and \"dev\". Each of them contains text document. \"Train\" and \"test\" splits also contain annotated entities, \"dev\" doesn't.\nEach entity is represented by a string of the following format: \"\\<start> \\<stop> \\<type>\", where \\<start> is a position of the first symbol of entity in text, \\<stop> is the last symbol position in text and \\<type> is a one of the aforementioned list of types.\n\nP.S.\nOriginal NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\\\\_(ツ)_/¯\n\n\n@article{Artemova2022runne, \n title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},\n author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},\n journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference \"Dialog\"},\n year={2022}\n}"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #arxiv-2108.13112 #region-us \n",
"# RuNNE dataset",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Citation Information\n- Contacts",
"## Dataset Description\nPart of NEREL dataset (URL a Russian dataset\nfor named entity recognition and relation extraction, used in RuNNE (2022)\ncompetition (URL\n\nEntities may be nested (see URL\n\nEntity types list:\n* AGE\n* AWARD\n* CITY\n* COUNTRY\n* CRIME\n* DATE\n* DISEASE\n* DISTRICT\n* EVENT\n* FACILITY\n* FAMILY\n* IDEOLOGY\n* LANGUAGE\n* LAW\n* LOCATION\n* MONEY\n* NATIONALITY\n* NUMBER\n* ORDINAL\n* ORGANIZATION\n* PENALTY\n* PERCENT\n* PERSON\n* PRODUCT\n* PROFESSION\n* RELIGION\n* STATE_OR_PROVINCE\n* TIME\n* WORK_OF_ART",
"## Dataset Structure\nThere are two \"configs\" or \"subsets\" of the dataset. \nUsing \n'load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']' \nyou can download list of entity types (\nDataset({\n features: ['type'],\n num_rows: 29\n})\n) \nUsing \n'load_dataset('MalakhovIlya/RuNNE', 'data')' or 'load_dataset('MalakhovIlya/RuNNE')'\nyou can download the data itself (DatasetDict)\n\nDataset consists of 3 splits: \"train\", \"test\" and \"dev\". Each of them contains text document. \"Train\" and \"test\" splits also contain annotated entities, \"dev\" doesn't.\nEach entity is represented by a string of the following format: \"\\<start> \\<stop> \\<type>\", where \\<start> is a position of the first symbol of entity in text, \\<stop> is the last symbol position in text and \\<type> is a one of the aforementioned list of types.\n\nP.S.\nOriginal NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\\\\_(ツ)_/¯\n\n\n@article{Artemova2022runne, \n title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},\n author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},\n journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference \"Dialog\"},\n year={2022}\n}"
] |
64fd51e4bb4d4d41e59df46d597725468c716c97 |
To fine-tune and evaluate like-BERT pre-trained models and obtain better text representation models, we collected and organized open-source datasets for semantic textual similarity (STS), natural language inference (NLI), question matching (QMC) and relevance. They are described below:

| Type | Dataset | Description | Size |
| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
| **General domain** | [OCNLI](https://www.cluebenchmarks.com/introduce.html) | A native Chinese natural language inference dataset, the first large-scale Chinese NLI dataset built from original (non-translated) Chinese text. OCNLI is part of the Chinese Language Understanding Evaluation benchmark (CLUE). | **Train**: 50437, **Dev**: 2950 |
| | [CMNLI](https://github.com/pluto-junzeng/CNSD) | Translated from the English NLI datasets XNLI and MNLI; it used to be part of CLUE and has since been replaced by OCNLI. | **Train**: 391783, **Dev**: 12241 |
| | [CSNLI](https://github.com/pluto-junzeng/CNSD) | Translated from the English NLI dataset SNLI. | **Train**: 545833, **Dev**: 9314, **Test**: 9176 |
| | [STS-B-Chinese](https://github.com/pluto-junzeng/CNSD) | Translated from the English semantic similarity dataset STSbenchmark. | **Train**: 5231, **Dev**: 1458, **Test**: 1361 |
| | [PAWS-X](https://www.luge.ai/#/luge/dataDetail?id=16) | A paraphrase (meaning) matching dataset whose sentence pairs have highly overlapping vocabulary, designed to probe a model's understanding of syntactic structure. | **Train**: 49401, **Dev**: 2000, **Test**: 2000 |
| | [PKU-Paraphrase-Bank](https://github.com/pkucoli/PKU-Paraphrase-Bank/) | A Chinese sentence paraphrase dataset, i.e. the same meaning expressed in a different way. | 509832 sentence pairs in total |
| **Question matching** | [LCQMC](https://www.luge.ai/#/luge/dataDetail?id=14) | A large-scale Chinese question matching dataset in the Baidu Knows domain, built from user questions across different topics on Baidu Knows. | **Train**: 238766, **Dev**: 8802, **Test**: 12500 |
| | [BQCorpus](https://www.luge.ai/#/luge/dataDetail?id=15) | Question matching data from the banking and finance domain, containing question pairs extracted from one year of online banking system logs; currently the largest question matching dataset in the banking domain. | **Train**: 100000, **Dev**: 10000, **Test**: 10000 |
| | [AFQMC](https://www.cluebenchmarks.com/introduce.html) | A question matching dataset from real Ant Financial business scenarios (anonymized), part of the Chinese Language Understanding Evaluation benchmark (CLUE). | **Train**: 34334, **Dev**: 4316 |
| | [DuQM](https://www.luge.ai/#/luge/dataDetail?id=27) | A question matching evaluation dataset (labels not publicly released), [part](https://github.com/baidu/DuReader/tree/master/DuQM) of the large-scale Baidu reading comprehension dataset (DuReader). | 50000 sentence pairs in total |
| **Dialogue and search** | [BUSTM: OPPO-xiaobu](https://www.luge.ai/#/luge/dataDetail?id=28) | Built from real user interaction logs in domains such as chit-chat, intelligent customer service, audio/video entertainment and information lookup, after anonymizing user information and filtering by similarity. The main characteristics of this Dialogue Short Text Matching dataset are short, highly colloquial texts and hard examples that are textually very similar yet semantically different. | **Train**: 167173, **Dev**: 10000 |
| | [QBQTC](https://github.com/CLUEbenchmark/QBQTC) | The QQ Browser search relevance dataset (QBQTC, QQ Browser Query Title Corpus), a learning-to-rank (LTR) dataset built by the QQ Browser search engine for large-scale search scenarios, annotated along dimensions such as relevance, authority, content quality and timeliness, and widely used in search engine applications. (Relevance labels: 0 = poorly related; 1 = somewhat related; 2 = highly related.) | **Train**: 180000, **Dev**: 20000, **Test**: 5000 |

*The datasets above were collected and organized mainly from [CLUE](https://www.cluebenchmarks.com/introduce.html) (the Chinese Language Understanding Evaluation benchmark), [SimCLUE](https://github.com/CLUEbenchmark/SimCLUE) (which aggregates many open-source text similarity datasets), and the text similarity datasets of [Baidu QianYan](https://www.luge.ai/#/).*

Based on the datasets collected above, the following **evaluation benchmark** was constructed (a usage sketch follows the table):
| Name | Size | Type |
| ---------------------- | ----- | ------------- |
| **Chinese-STS-B-dev** | 1458 | label=0.0~1.0 |
| **Chinese-STS-B-test** | 1361 | label=0.0~1.0 |
| **afqmc-dev** | 4316 | label=0,1 |
| **lcqmc-dev** | 8802 | label=0,1 |
| **bqcorpus-dev** | 10000 | label=0,1 |
| **pawsx_dev** | 2000 | label=0,1 |
| **oppo-xiaobu-dev** | 10000 | label=0,1 |
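As a rough illustration of how these dev sets are typically used, the sketch below scores sentence pairs with a sentence-embedding model and reports the Spearman correlation against the gold labels. The model id, file name and column layout are placeholders, not part of this dataset card.

```python
import pandas as pd
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

# Placeholders: swap in the embedding model you are evaluating and the actual
# benchmark file / column layout.
model = SentenceTransformer("some-chinese-sentence-embedding-model")
pairs = pd.read_csv("Chinese-STS-B-dev.tsv", sep="\t", names=["s1", "s2", "label"])

emb1 = model.encode(pairs["s1"].tolist(), convert_to_tensor=True)
emb2 = model.encode(pairs["s2"].tolist(), convert_to_tensor=True)
scores = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()

print("Spearman correlation:", spearmanr(scores, pairs["label"]).correlation)
```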
*TODO: both the number and the diversity of the collected datasets need to be expanded further so that they reflect the quality of representation models more faithfully.*
| DMetaSoul/chinese-semantic-textual-similarity | [
"license:apache-2.0",
"region:us"
] | 2022-04-02T09:10:43+00:00 | {"license": "apache-2.0"} | 2022-04-02T09:38:47+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| To fine-tune and evaluate like-BERT pre-trained models and obtain better text representation models, we collected and organized open-source datasets for semantic textual similarity (STS), natural language inference (NLI), question matching (QMC) and relevance. They are described below:
*The datasets above were collected and organized mainly from CLUE (the Chinese Language Understanding Evaluation benchmark), SimCLUE (which aggregates many open-source text similarity datasets), and the text similarity datasets of Baidu QianYan.*
Based on the datasets collected above, the following evaluation benchmark was constructed:
Name: Chinese-STS-B-dev, Size: 1458, Type: label=0.0~1.0
Name: Chinese-STS-B-test, Size: 1361, Type: label=0.0~1.0
Name: afqmc-dev, Size: 4316, Type: label=0,1
Name: lcqmc-dev, Size: 8802, Type: label=0,1
Name: bqcorpus-dev, Size: 10000, Type: label=0,1
Name: pawsx\_dev, Size: 2000, Type: label=0,1
Name: oppo-xiaobu-dev, Size: 10000, Type: label=0,1
*TODO: both the number and the diversity of the collected datasets need to be expanded further so that they reflect the quality of representation models more faithfully.*
| [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
a6b8d891d393e97a4efac791afffb2d7de5e57c6 | # Dataset Card for fever_gold_evidence
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
- **Repository:** https://github.com/copenlu/fever-adversarial-attacks
- **Paper:** https://aclanthology.org/2020.emnlp-main.256/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the
First Workshop on Fact Extraction and VERification (FEVER), pages 109-113."
More details can be found in https://github.com/copenlu/fever-adversarial-attacks
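A minimal loading sketch is shown below; the repository id comes from this card, while the split and column names are assumptions to be checked against the loaded dataset.

```python
from datasets import load_dataset

# Sketch only: repo id from this card; inspect the result to see the actual
# splits and column names (e.g. claim / evidence / label).
fever = load_dataset("copenlu/fever_gold_evidence")
print(fever)
print(fever["train"][0])
```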
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{atanasova-etal-2020-generating,
title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
author = "Atanasova, Pepa and
Wright, Dustin and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.256",
doi = "10.18653/v1/2020.emnlp-main.256",
pages = "3168--3177",
abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}
``` | copenlu/fever_gold_evidence | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-02T13:52:35+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["machine-generated", "crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|fever"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "fever", "pretty_name": ""} | 2022-11-17T11:42:54+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
| # Dataset Card for fever_gold_evidence
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact:
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the
First Workshop on Fact Extraction and VERification (FEVER), pages 109-113."
More details can be found in URL
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for fever_gold_evidence",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nDataset for training classification-only fact checking with claims from the FEVER dataset.\nThis dataset is used in the paper \"Generating Label Cohesive and Well-Formed Adversarial Claims\", EMNLP 2020\n\nThe evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.\nFor *NEI* claims, we extract evidence sentences with the system in \"Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the\nFirst Workshop on Fact Extraction and VERification (FEVER), pages 109\u0013113.\"\nMore details can be found in URL",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n",
"# Dataset Card for fever_gold_evidence",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nDataset for training classification-only fact checking with claims from the FEVER dataset.\nThis dataset is used in the paper \"Generating Label Cohesive and Well-Formed Adversarial Claims\", EMNLP 2020\n\nThe evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.\nFor *NEI* claims, we extract evidence sentences with the system in \"Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the\nFirst Workshop on Fact Extraction and VERification (FEVER), pages 109\u0013113.\"\nMore details can be found in URL",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
d267838191dbf769374ef1f8ce0c0a04a413b18a | # Wordnet definitions for English
Dataset by Princeton WordNet and the Open English WordNet team
https://github.com/globalwordnet/english-wordnet
This dataset contains every entry in wordnet that has a definition and an example.
Be aware that the word "null" can be misinterpreted as a null value if loading it in with e.g. pandas | marksverdhei/wordnet-definitions-en-2021 | [
"region:us"
] | 2022-04-02T18:02:14+00:00 | {} | 2022-04-04T20:55:03+00:00 | [] | [] | TAGS
#region-us
| # Wordnet definitions for English
Dataset by Princeton WordNet and the Open English WordNet team
URL
This dataset contains every entry in wordnet that has a definition and an example.
Be aware that the word "null" can be misinterpreted as a null value if loading it in with e.g. pandas | [
"# Wordnet definitions for English\n\nDataset by Princeton WordNet and the Open English WordNet team\nURL\n\nThis dataset contains every entry in wordnet that has a definition and an example.\nBe aware that the word \"null\" can be misinterpreted as a null value if loading it in with e.g. pandas"
] | [
"TAGS\n#region-us \n",
"# Wordnet definitions for English\n\nDataset by Princeton WordNet and the Open English WordNet team\nURL\n\nThis dataset contains every entry in wordnet that has a definition and an example.\nBe aware that the word \"null\" can be misinterpreted as a null value if loading it in with e.g. pandas"
] |
49cf0593a2baf2fd848d81470d7c439c3ab8d3ec | This dataset was previously created in Kaggle by [Andrea Morales Garzón](https://huggingface.co/andreamorgar).
[Link Kaggle](https://www.kaggle.com/andreamorgar/spanish-poetry-dataset/version/1) | hackathon-pln-es/spanish-poetry-dataset | [
"region:us"
] | 2022-04-03T02:31:57+00:00 | {} | 2022-04-03T02:34:26+00:00 | [] | [] | TAGS
#region-us
| This dataset was previously created in Kaggle by Andrea Morales Garzón.
Link Kaggle | [] | [
"TAGS\n#region-us \n"
] |
aa48b3c7f4d0c1450f8f2df27ceb8a882b022600 |
# Spanish to Quechua
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [team members](#team-members)
## Dataset Description
This dataset is a compilation of websites and other datasets listed in the [dataset creation section](#dataset-creation). It contains translations from Spanish (es) to Quechua of Ayacucho (qu).
## Dataset Structure
### Data Fields
- es: The sentence in Spanish.
- qu: The sentence in Quechua of Ayacucho.
### Data Splits
- train: To train the model (102 747 sentences).
- Validation: To validate the model during training (12 844 sentences).
- test: To evaluate the model when the training is finished (12 843 sentences).
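A minimal loading sketch with the `datasets` library is shown below; the repository id is taken from this card, and the split and field names follow the Data Fields and Data Splits sections above.

```python
from datasets import load_dataset

# Sketch only: split and field names are taken from this card and may need
# adjusting if the published configuration differs.
ds = load_dataset("hackathon-pln-es/spanish-to-quechua")
sample = ds["train"][0]
print(sample["es"], "->", sample["qu"])
```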
## Dataset Creation
### Source Data
This dataset has been generated from:
- "Mundo Quechua" by "Ivan Acuña" - [available here](https://mundoquechua.blogspot.com/2006/07/frases-comunes-en-quechua.html)
- "Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua" by "El comercio" - [available here](https://elcomercio.pe/tecnologia/actualidad/traductor-frases-romanticas-quechua-noticia-467022-noticia/)
- "Piropos y frases de amor en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2019/12/palabras-en-quechua-de-amor.html)
- "Corazón en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2020/05/corazon-en-quechua.html)
- "Oraciones en Español traducidas a Quechua" by "Tatoeba" - [available here](https://tatoeba.org/es/sentences/search?from=spa&query=&to=que)
- "AmericasNLP 2021 Shared Task on Open Machine Translation" by "americasnlp2021" - [available here](https://github.com/AmericasNLP/americasnlp2021/tree/main/data/quechua-spanish/parallel_data/es-quy)
### Data cleaning
- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.
## Considerations for Using the Data
This is a first version of the dataset; we expect to improve it over time and, especially, to neutralize the biblical themes.
## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos) | hackathon-pln-es/spanish-to-quechua | [
"task_categories:translation",
"language:es",
"language:qu",
"region:us"
] | 2022-04-03T03:02:58+00:00 | {"language": ["es", "qu"], "task_categories": ["translation"], "task": ["translation"]} | 2022-10-25T09:03:46+00:00 | [] | [
"es",
"qu"
] | TAGS
#task_categories-translation #language-Spanish #language-Quechua #region-us
|
# Spanish to Quechua
## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- team members
## Dataset Description
This dataset is a compilation of websites and other datasets listed in the dataset creation section. It contains translations from Spanish (es) to Quechua of Ayacucho (qu).
## Dataset Structure
### Data Fields
- es: The sentence in Spanish.
- qu: The sentence in Quechua of Ayacucho.
### Data Splits
- train: To train the model (102 747 sentences).
- Validation: To validate the model during training (12 844 sentences).
- test: To evaluate the model when the training is finished (12 843 sentences).
## Dataset Creation
### Source Data
This dataset has been generated from:
- "Mundo Quechua" by "Ivan Acuña" - available here
- "Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua" by "El comercio" - available here
- "Piropos y frases de amor en quechua" by "Soy Quechua" - available here
- "Corazón en quechua" by "Soy Quechua" - available here
- "Oraciones en Español traducidas a Quechua" by "Tatoeba" - available here
- "AmericasNLP 2021 Shared Task on Open Machine Translation" by "americasnlp2021" - available here
### Data cleaning
- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.
## Considerations for Using the Data
This is a first version of the dataset; we expect to improve it over time and, especially, to neutralize the biblical themes.
## Team members
- Sara Benel
- Jose Vílchez | [
"# Spanish to Quechua",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Dataset Creation\n- Considerations for Using the Data\n- team members",
"## Dataset Description\n\nThis dataset is a recopilation of webs and others datasets that shows in dataset creation section. This contains translations from spanish (es) to Qechua of Ayacucho (qu).",
"## Dataset Structure",
"### Data Fields\n- es: The sentence in Spanish.\n- qu: The sentence in Quechua of Ayacucho.",
"### Data Splits\n- train: To train the model (102 747 sentences).\n- Validation: To validate the model during training (12 844 sentences).\n- test: To evaluate the model when the training is finished (12 843 sentences).",
"## Dataset Creation",
"### Source Data\nThis dataset has generated from:\n- \"Mundo Quechua\" by \"Ivan Acuña\" - available here\n- \"Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua\" by \"El comercio\" - available here\n- \"Piropos y frases de amor en quechua\" by \"Soy Quechua\" - available here\n- \"Corazón en quechua\" by \"Soy Quechua\" - available here\n- \"Oraciones en Español traducidas a Quechua\" by \"Tatoeba\" - available here\n- \"AmericasNLP 2021 Shared Task on Open Machine Translation\" by \"americasnlp2021\" - available here",
"### Data cleaning\n- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.",
"## Considerations for Using the Data\n\nThis is a first version of the dataset, we expected improve it over time and especially to neutralize the biblical themes.",
"## Team members\n- Sara Benel\n- Jose Vílchez"
] | [
"TAGS\n#task_categories-translation #language-Spanish #language-Quechua #region-us \n",
"# Spanish to Quechua",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Dataset Creation\n- Considerations for Using the Data\n- team members",
"## Dataset Description\n\nThis dataset is a recopilation of webs and others datasets that shows in dataset creation section. This contains translations from spanish (es) to Qechua of Ayacucho (qu).",
"## Dataset Structure",
"### Data Fields\n- es: The sentence in Spanish.\n- qu: The sentence in Quechua of Ayacucho.",
"### Data Splits\n- train: To train the model (102 747 sentences).\n- Validation: To validate the model during training (12 844 sentences).\n- test: To evaluate the model when the training is finished (12 843 sentences).",
"## Dataset Creation",
"### Source Data\nThis dataset has generated from:\n- \"Mundo Quechua\" by \"Ivan Acuña\" - available here\n- \"Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua\" by \"El comercio\" - available here\n- \"Piropos y frases de amor en quechua\" by \"Soy Quechua\" - available here\n- \"Corazón en quechua\" by \"Soy Quechua\" - available here\n- \"Oraciones en Español traducidas a Quechua\" by \"Tatoeba\" - available here\n- \"AmericasNLP 2021 Shared Task on Open Machine Translation\" by \"americasnlp2021\" - available here",
"### Data cleaning\n- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.",
"## Considerations for Using the Data\n\nThis is a first version of the dataset, we expected improve it over time and especially to neutralize the biblical themes.",
"## Team members\n- Sara Benel\n- Jose Vílchez"
] |
c1c124ba6da774db9f83a25fc3f2ee70aa4400d1 |
# Dataset Card for Architext
## Dataset Description
This is the raw training data used to train the Architext models referenced in "Architext: Language-Driven Generative Architecture Design".
- **Homepage:** https://architext.design/
- **Paper:** https://arxiv.org/abs/2303.07519
- **Point of Contact:** Theodoros Galanos (https://twitter.com/TheodoreGalanos)
## Dataset Creation
The data were synthetically generated by a parametric design script in Grasshopper 3D, a virtual algorithmic environment in the design software Rhinoceros 3D.
## Considerations for Using the Data
The data describe one instance of architectural design, specifically layout generation for residential apartments. Even in that case, the data are limited in the possible shapes they can represent, their size, and their typologies. Additionally, the annotations used as language prompts to generate a design are restricted to automatically generated annotations based on layout characteristics (adjacency, typology, number of spaces).
### Licensing Information
The dataset is licensed under the Apache 2.0 license.
### Citation Information
If you use the dataset please cite:
```
@article{galanos2023architext,
title={Architext: Language-Driven Generative Architecture Design},
author={Galanos, Theodoros and Liapis, Antonios and Yannakakis, Georgios N},
journal={arXiv preprint arXiv:2303.07519},
year={2023}
}
``` | THEODOROS/Architext_v1 | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"architecture",
"architext",
"arxiv:2303.07519",
"region:us"
] | 2022-04-03T11:03:17+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "architext_v1", "tags": ["architecture", "architext"]} | 2023-05-21T06:19:23+00:00 | [
"2303.07519"
] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #architecture #architext #arxiv-2303.07519 #region-us
|
# Dataset Card for Architext
## Dataset Description
This is the raw training data used to train the Architext models referenced in "Architext: Language-Driven Generative Architecture Design".
- Homepage: URL
- Paper: URL
- Point of Contact: Theodoros Galanos (URL
## Dataset Creation
The data were synthetically generated by a parametric design script in Grasshopper 3D, a virtual algorithmic environment in the design software Rhinoceros 3D.
## Considerations for Using the Data
The data describe one instance of architectural design, specifically layout generation for residential apartments. Even in that case, the data are limited in the possible shapes they can represent, their size, and their typologies. Additionally, the annotations used as language prompts to generate a design are restricted to automatically generated annotations based on layout characteristics (adjacency, typology, number of spaces).
### Licensing Information
The dataset is licensed under the Apache 2.0 license.
If you use the dataset please cite:
| [
"# Dataset Card for Architext",
"## Dataset Description\nThis is the raw training data used to train the Architext models referenced in \"Architext: Language-Driven Generative Architecture Design\" . \n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: Theodoros Galanos (URL",
"## Dataset Creation\nThe data were synthetically generated by a parametric design script in Grasshopper 3D, a virtual algorithmic environment in the design software Rhinoceros 3D.",
"## Considerations for Using the Data\nThe data describe once instance of architectural design, specifically layout generation for residential appartments. Even in that case, the data is limited in the possible shapes they can represent, size, and typologies. Additionally, the annotations used as language prompts to generate a design are restricted to automatically generated annotations based on layout characteristics (adjacency, typology, number of spaces).",
"### Licensing Information\nThe dataset is licensed under the Apache 2.0 license.\n\n\n\nIf you use the dataset please cite:"
] | [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #architecture #architext #arxiv-2303.07519 #region-us \n",
"# Dataset Card for Architext",
"## Dataset Description\nThis is the raw training data used to train the Architext models referenced in \"Architext: Language-Driven Generative Architecture Design\" . \n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: Theodoros Galanos (URL",
"## Dataset Creation\nThe data were synthetically generated by a parametric design script in Grasshopper 3D, a virtual algorithmic environment in the design software Rhinoceros 3D.",
"## Considerations for Using the Data\nThe data describe once instance of architectural design, specifically layout generation for residential appartments. Even in that case, the data is limited in the possible shapes they can represent, size, and typologies. Additionally, the annotations used as language prompts to generate a design are restricted to automatically generated annotations based on layout characteristics (adjacency, typology, number of spaces).",
"### Licensing Information\nThe dataset is licensed under the Apache 2.0 license.\n\n\n\nIf you use the dataset please cite:"
] |
b5abf557ea371d452dfe7e6847f9f1f2e6f31ef4 | # RockingFace
A distribution of the Amp-Space Dataset: x/y audio pairs of input signals and their recorded outputs after being processed by an audio effect (amplifier, stompbox, studio tools, etc.). | narad/rockingface | [
"region:us"
] | 2022-04-03T11:21:33+00:00 | {} | 2022-10-03T22:34:40+00:00 | [] | [] | TAGS
#region-us
| # RockingFace
A distribution of the Amp-Space Dataset: x/y audio pairs of input signals and their recorded outputs after being processed by an audio effect (amplifier, stompbox, studio tools, etc.). | [
"# RockingFace\n\nA distribution of Amp-Space Dataset of x/y pairs audio pairs of input signals and recorded outputs after being processed by an audio effect (amplifier, stompbox, studio tools, etc.)"
] | [
"TAGS\n#region-us \n",
"# RockingFace\n\nA distribution of Amp-Space Dataset of x/y pairs audio pairs of input signals and recorded outputs after being processed by an audio effect (amplifier, stompbox, studio tools, etc.)"
] |
aa4acbaa7537aa9ae6dc5447dc82e59146ec083e | A synthetic dataset for GAN experiments.
Created with a CLOOB Conditioned Latent Diffusion model (https://github.com/JD-P/cloob-latent-diffusion)
For each color in a list of standard CSS color names, a set of images was generated using the following command:
```
python cfg_sample.py --autoencoder autoencoder_kl_32x32x4\
--checkpoint yfcc-latent-diffusion-f8-e2-s250k.ckpt\
--method plms\
--cond-scale 1.0\
--seed 34\
--steps 25\
-n 36\
"A glass orb with {color} spacetime fire burning inside"
```
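A rough reconstruction of the generation loop described above (not the author's actual script): it iterates over standard CSS color names, here taken from `matplotlib.colors.CSS4_COLORS`, and invokes `cfg_sample.py` once per color with the flags shown.

```
import subprocess
from matplotlib.colors import CSS4_COLORS  # standard CSS color names -> hex

for color in CSS4_COLORS:
    subprocess.run([
        "python", "cfg_sample.py",
        "--autoencoder", "autoencoder_kl_32x32x4",
        "--checkpoint", "yfcc-latent-diffusion-f8-e2-s250k.ckpt",
        "--method", "plms",
        "--cond-scale", "1.0",
        "--seed", "34",
        "--steps", "25",
        "-n", "36",
        f"A glass orb with {color} spacetime fire burning inside",
    ], check=True)
```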
| johnowhitaker/colorbs | [
"region:us"
] | 2022-04-03T11:24:32+00:00 | {} | 2022-04-04T05:52:33+00:00 | [] | [] | TAGS
#region-us
| A synthetic dataset for GAN experiments.
Created with a CLOOB Conditioned Latent Diffusion model (URL
For each color in a list of standard CSS color names, a set of images was generated using the following command:
| [] | [
"TAGS\n#region-us \n"
] |
39816326bf8c3499e150a27e13336760e7c3d904 |
## Description
An adaptation of the [eHealth-KD Challenge 2020 dataset](https://knowledge-learning.github.io/ehealthkd-2020/), filtered only for the task of NER. Some adaptations of the original dataset have been made:
- BIO annotations
- Error fixes
- Overlapping entities have been processed as a single entity
## Dataset loading
```
from datasets import load_dataset

datasets = load_dataset('json', data_files={'train': ['@YOUR_PATH@/training_anns_bio.json'], 'testing': ['@YOUR_PATH@/testing_anns_bio.json'], 'validation': ['@YOUR_PATH@/development_anns_bio.json']})
```
| fmmolina/eHealth-KD-Adaptation | [
"license:afl-3.0",
"region:us"
] | 2022-04-03T13:04:06+00:00 | {"license": "afl-3.0"} | 2022-04-11T06:16:13+00:00 | [] | [] | TAGS
#license-afl-3.0 #region-us
|
## Description
An adaptation of the eHealth-KD Challenge 2020 dataset, filtered only for the task of NER. Some adaptations of the original dataset have been made:
- BIO annotations
- Error fixes
- Overlapping entities have been processed as a single entity
## Dataset loading
from datasets import load_dataset

datasets = load_dataset('json', data_files={'train': ['@YOUR_PATH@/training_anns_bio.json'], 'testing': ['@YOUR_PATH@/testing_anns_bio.json'], 'validation': ['@YOUR_PATH@/development_anns_bio.json']})
"## Description\n\nAn adaptation of eHealth-KD Challenge 2020 dataset, filtered only for the task of NER. Some adaptation of the original dataset have been made:\n\n- BIO annotations\n- Errors fixing\n- Overlapped entities has been processed as an unique entity",
"## Dataset loading\n\ndatasets = load_dataset('json', data_files={'train': ['@YOUR_PATH@/training_anns_bio.json'], 'testing': ['@YOUR_PATH@/testing_anns_bio.json'], 'validation': ['@YOUR_PATH@/development_anns_bio.json']})"
] | [
"TAGS\n#license-afl-3.0 #region-us \n",
"## Description\n\nAn adaptation of eHealth-KD Challenge 2020 dataset, filtered only for the task of NER. Some adaptation of the original dataset have been made:\n\n- BIO annotations\n- Errors fixing\n- Overlapped entities has been processed as an unique entity",
"## Dataset loading\n\ndatasets = load_dataset('json', data_files={'train': ['@YOUR_PATH@/training_anns_bio.json'], 'testing': ['@YOUR_PATH@/testing_anns_bio.json'], 'validation': ['@YOUR_PATH@/development_anns_bio.json']})"
] |
193e876ac72cd6b2a7e1ec68ddec7915ef5ff324 |
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- [CAES corpus](http://galvan.usc.es/caes/) (Martínez et al., 2019): the "Corpus de Aprendices del Español" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These texts are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.
### Languages
Spanish
## Dataset Structure
Texts are tokenized to create a paragraph-based dataset
### Data Fields
The dataset is formatted as a json lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: simple or complex.
- **Level-3:** standardized readability level: basic, intermediate or advanced.
- **Text:** original text formatted into sentences.
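A small sketch of how these fields could be used once the dataset is loaded; the lowercase column names and the "train" split below are assumptions, so check `column_names` against the actual data first.

```
from collections import Counter
from datasets import load_dataset

# The "train" split name is an assumption.
dataset = load_dataset("hackathon-pln-es/readability-es-caes", split="train")
print(dataset.column_names)  # verify the actual field names first

# Assuming a lowercase "level" column with values "simple" / "complex".
print(Counter(dataset["level"]))

# Assuming a lowercase "text" column holding the sentence-split paragraph.
print(dataset[0]["text"][:200])
```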
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
| hackathon-pln-es/readability-es-caes | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"readability",
"region:us"
] | 2022-04-03T20:42:19+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "readability-es-caes", "tags": ["readability"]} | 2023-04-13T07:51:40+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Spanish #license-cc-by-4.0 #readability #region-us
|
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- CAES corpus (Martínez et al., 2019): the "Corpus de Aprendices del Español" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These texts are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.
### Languages
Spanish
## Dataset Structure
Texts are tokenized to create a paragraph-based dataset
### Data Fields
The dataset is formatted as a json lines and includes the following fields:
- Category: when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- Level: standardized readability level: simple or complex.
- Level-3: standardized readability level: basic, intermediate or advanced.
- Text: original text formatted into sentences.
## Additional Information
### Licensing Information
URL
Please cite this page to give credit to the authors :)
### Team
- Laura Vásquez-Rodríguez
- Pedro Cuenca
- Sergio Morales
- Fernando Alva-Manchego
| [
"# Dataset Card for [readability-es-caes]",
"## Dataset Description",
"### Dataset Summary\n\nThis dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources:\n- CAES corpus (Martínez et al., 2019): the \"Corpus de Aprendices del Español\" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These text are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.",
"### Languages\n\nSpanish",
"## Dataset Structure\n\nTexts are tokenized to create a paragraph-based dataset",
"### Data Fields\n\nThe dataset is formatted as a json lines and includes the following fields:\n- Category: when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).\n- Level: standardized readability level: simple or complex.\n- Level-3: standardized readability level: basic, intermediate or advanced.\n- Text: original text formatted into sentences.",
"## Additional Information",
"### Licensing Information\n\nURL\n\n\n\nPlease cite this page to give credit to the authors :)",
"### Team\n- Laura Vásquez-Rodríguez\n- Pedro Cuenca\n- Sergio Morales\n- Fernando Alva-Manchego"
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Spanish #license-cc-by-4.0 #readability #region-us \n",
"# Dataset Card for [readability-es-caes]",
"## Dataset Description",
"### Dataset Summary\n\nThis dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources:\n- CAES corpus (Martínez et al., 2019): the \"Corpus de Aprendices del Español\" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These text are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.",
"### Languages\n\nSpanish",
"## Dataset Structure\n\nTexts are tokenized to create a paragraph-based dataset",
"### Data Fields\n\nThe dataset is formatted as a json lines and includes the following fields:\n- Category: when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).\n- Level: standardized readability level: simple or complex.\n- Level-3: standardized readability level: basic, intermediate or advanced.\n- Text: original text formatted into sentences.",
"## Additional Information",
"### Licensing Information\n\nURL\n\n\n\nPlease cite this page to give credit to the authors :)",
"### Team\n- Laura Vásquez-Rodríguez\n- Pedro Cuenca\n- Sergio Morales\n- Fernando Alva-Manchego"
] |
ed0fe1b82f32972a3312f1b5b75c9f97650ce56e |
# Dataset Card of "unam_tesis"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
- [[email protected]](mailto:[email protected])
- [[email protected]](mailto:[email protected])
### Dataset Summary
El dataset unam_tesis cuenta con 1000 tesis de 5 carreras de la Universidad Nacional Autónoma de México (UNAM), 200 por carrera. Se pretende seguir incrementando este dataset con las demás carreras y más tesis.
### Supported Tasks and Leaderboards
text-classification
### Languages
Español (es)
## Dataset Structure
### Data Instances
Las instancias del dataset son de la siguiente forma:
El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19. La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de NicolásRomero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico,con la finalidad de inferir en la población. Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación. 
Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría
| Carreras | Número de instancias |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho| 200 |
| Economía| 200 |
| Psicología| 200 |
| Química Farmacéutico Biológica| 200 |
### Data Fields
El dataset está compuesto por los siguientes campos: "texto|titulo|carrera". <br/>
texto: Se refiere al texto de la introducción de la tesis. <br/>
titulo: Se refiere al título de la tesis. <br/>
carrera: Se refiere al nombre de la carrera a la que pertenece la tesis. <br/>
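Como referencia, el siguiente fragmento muestra una forma mínima de cargar el dataset y acceder a los campos descritos arriba; los nombres de los campos ("texto", "titulo", "carrera") provienen de esta tarjeta, mientras que el nombre de la partición "train" es una suposición.

```
from datasets import load_dataset

# Los campos "texto", "titulo" y "carrera" se documentan arriba;
# el nombre de la partición "train" es una suposición.
unam = load_dataset("hackathon-pln-es/unam_tesis", split="train")

ejemplo = unam[0]
print(ejemplo["carrera"])
print(ejemplo["titulo"])
print(ejemplo["texto"][:300])
```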
### Data Splits
El dataset tiene 2 particiones: entrenamiento (train) y prueba (test).
| Partición | Número de instancias |
|--------------|-------------------|
| Entrenamiento | 800 |
| Prueba | 200 |
## Dataset Creation
### Curation Rationale
La creación de este dataset ha sido motivada por la participación en el Hackathon 2022 de PLN en Español organizado por Somos NLP, con el objetivo de democratizar el NLP en español y promover su aplicación a buenas causas y, debido a que no existe un dataset de tesis en español.
### Source Data
#### Initial Data Collection and Normalization
El dataset original (dataset_tesis) fue creado a partir de un proceso de scraping donde se extrajeron tesis de la Universidad Nacional Autónoma de México en el siguiente link: https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01.
Se optó por realizar un scraper para conseguir la información. Se decidió usar la base de datos TESIUNAM, la cual es un catálogo en donde se pueden visualizar las tesis de los sustentantes que obtuvieron un grado en la UNAM, así como de las tesis de licenciatura de escuelas incorporadas a ella.
Para ello, en primer lugar se consultó la Oferta Académica (http://oferta.unam.mx/indice-alfabetico.html) de la Universidad, sitio de donde se extrajo cada una de las 131 licenciaturas en forma de lista. Después, se analizó cada uno de los casos presente en la base de datos, debido a que existen carreras con más de 10 tesis, otras con menos de 10, o con solo una o ninguna tesis disponible. Se usó Selenium para la interacción con un navegador Web (Edge) y está actualmente configurado para obtener las primeras 20 tesis, o menos, por carrera.
Este scraper obtiene de esta base de datos:
- Nombres del Autor
- Apellidos del Autor
- Título de la Tesis
- Año de la Tesis
- Carrera de la Tesis
A la vez, este scraper descarga cada una de las tesis en la carpeta Downloads del equipo local. En el csv formado por el scraper se añadió el "Resumen/Introduccion/Conclusion de la tesis", dependiendo cual primero estuviera disponible, ya que la complejidad recae en la diferencia de la estructura y formato de cada una de las tesis.
#### Who are the source language producers?
Los datos son creados por humanos de forma manual, en este caso por estudiantes de la UNAM y revisados por sus supervisores.
### Annotations
El dataset fue procesado para eliminar información innecesaria para los clasificadores. El dataset original cuenta con los siguientes campos: "texto|autor_nombre|autor_apellido|titulo|año|carrera".
#### Annotation process
Se extrajeron primeramente 200 tesis de 5 carreras de esta universidad: Actuaría, Derecho, Economía, Psicología y Química Farmacéutico Biológica. De estas se extrajo: introducción, nombre del autor, apellidos de autor, título de la tesis y la carrera. Los datos fueron revisados y limpiados por los autores.
Luego, el dataset fue procesado con las siguientes tareas de Procesamiento de Lenguaje Natural (dataset_tesis_procesado):
- convertir a minúsculas
- tokenización
- eliminar palabras que no son alfanuméricas
- eliminar palabras vacías
- stemming: eliminar plurales
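A modo de ilustración, este es un esbozo del preprocesamiento listado arriba usando NLTK para español; la tarjeta no indica qué biblioteca se utilizó realmente, por lo que la implementación exacta es una suposición.

```
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")

stop_es = set(stopwords.words("spanish"))
stemmer = SnowballStemmer("spanish")

def preprocesar(texto: str) -> str:
    # minúsculas + tokenización
    tokens = word_tokenize(texto.lower(), language="spanish")
    # eliminar tokens no alfanuméricos y palabras vacías
    tokens = [t for t in tokens if t.isalnum() and t not in stop_es]
    # stemming (reduce plurales y otras flexiones)
    return " ".join(stemmer.stem(t) for t in tokens)

print(preprocesar("Las condiciones asociadas al aprendizaje desde casa"))
```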
#### Who are the annotators?
Las anotaciones fueron hechas por humanos, en este caso los autores del dataset, usando código de máquina en el lenguaje Python.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
El presente conjunto de datos favorecerá la búsqueda e investigación relacionada con tesis en español, a partir de su categorización automática por un modelo entrenado con este dataset. Esta tarea favorece el cumplimiento del objetivo 4 de Desarrollo Sostenible de la ONU: Educación y Calidad (https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/).
### Discussion of Biases
El texto tiene algunos errores en la codificación por lo que algunos caracteres como las tildes no se muestran correctamente. Las palabras con estos caracteres son eliminadas en el procesamiento hasta que se corrija el problema.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Miembros del equipo (user de Hugging Face):
[Isaac Isaías López López](https://huggingface.co/MajorIsaiah)
[Yisel Clavel Quintero](https://huggingface.co/clavel)
[Dionis López](https://huggingface.co/inoid)
[Ximena Yeraldin López López](https://huggingface.co/Ximyer)
### Licensing Information
La versión 1.0.0 del dataset unam_tesis está liberada bajo la licencia <a href='http://www.apache.org/licenses/LICENSE-2.0'/> Apache-2.0 License </a>.
### Citation Information
"Esta base de datos se ha creado en el marco del Hackathon 2022 de PLN en Español organizado por Somos NLP patrocinado por Platzi, Paperspace y Hugging Face: https://huggingface.co/hackathon-pln-es."
Para citar este dataset, por favor, use el siguiente formato de cita:
@inproceedings{Hackathon 2022 de PLN en Español,
title={UNAM's Theses with BETO fine-tuning classify},
author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin},
booktitle={Hackathon 2022 de PLN en Español},
year={2022}
}
### Contributions
Gracias a [@yiselclavel](https://github.com/yiselclavel) y [@IsaacIsaias](https://github.com/IsaacIsaias) por agregar este dataset.
| hackathon-pln-es/unam_tesis | [
"task_categories:text-classification",
"task_ids:language-modeling",
"annotations_creators:MajorIsaiah",
"annotations_creators:Ximyer",
"annotations_creators:clavel",
"annotations_creators:inoid",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n=200",
"source_datasets:original",
"language:es",
"license:apache-2.0",
"region:us"
] | 2022-04-03T22:25:31+00:00 | {"annotations_creators": ["MajorIsaiah", "Ximyer", "clavel", "inoid"], "language_creators": ["crowdsourced"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n=200"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["language-modeling"], "pretty_name": "UNAM Tesis"} | 2023-10-11T13:57:54+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #task_ids-language-modeling #annotations_creators-MajorIsaiah #annotations_creators-Ximyer #annotations_creators-clavel #annotations_creators-inoid #language_creators-crowdsourced #multilinguality-monolingual #size_categories-n=200 #source_datasets-original #language-Spanish #license-apache-2.0 #region-us
| Dataset Card of "unam\_tesis"
=============================
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
* yiselclavel@URL
* isaac7isaias@URL
### Dataset Summary
El dataset unam\_tesis cuenta con 1000 tesis de 5 carreras de la Universidad Nacional Autónoma de México (UNAM), 200 por carrera. Se pretende seguir incrementando este dataset con las demás carreras y más tesis.
### Supported Tasks and Leaderboards
text-classification
### Languages
Español (es)
Dataset Structure
-----------------
### Data Instances
Las instancias del dataset son de la siguiente forma:
```
El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19. La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de NicolásRomero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico,con la finalidad de inferir en la población. Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación. 
Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría
```
### Data Fields
El dataset está compuesto por los siguientes campos: "texto|titulo|carrera".
texto: Se refiere al texto de la introducción de la tesis.
titulo: Se refiere al título de la tesis.
carrera: Se refiere al nombre de la carrera a la que pertenece la tesis.
### Data Splits
El dataset tiene 2 particiones: entrenamiento (train) y prueba (test).
Dataset Creation
----------------
### Curation Rationale
La creación de este dataset ha sido motivada por la participación en el Hackathon 2022 de PLN en Español organizado por Somos NLP, con el objetivo de democratizar el NLP en español y promover su aplicación a buenas causas y, debido a que no existe un dataset de tesis en español.
### Source Data
#### Initial Data Collection and Normalization
El dataset original (dataset\_tesis) fue creado a partir de un proceso de scraping donde se extrajeron tesis de la Universidad Nacional Autónoma de México en el siguiente link: URL
Se optó por realizar un scraper para conseguir la información. Se decidió usar la base de datos TESIUNAM, la cual es un catálogo en donde se pueden visualizar las tesis de los sustentantes que obtuvieron un grado en la UNAM, así como de las tesis de licenciatura de escuelas incorporadas a ella.
Para ello, en primer lugar se consultó la Oferta Académica (URL de la Universidad, sitio de donde se extrajo cada una de las 131 licenciaturas en forma de lista. Después, se analizó cada uno de los casos presente en la base de datos, debido a que existen carreras con más de 10 tesis, otras con menos de 10, o con solo una o ninguna tesis disponible. Se usó Selenium para la interacción con un navegador Web (Edge) y está actualmente configurado para obtener las primeras 20 tesis, o menos, por carrera.
Este scraper obtiene de esta base de datos:
* Nombres del Autor
* Apellidos del Autor
* Título de la Tesis
* Año de la Tesis
* Carrera de la Tesis
A la vez, este scraper descarga cada una de las tesis en la carpeta Downloads del equipo local. En el csv formado por el scraper se añadió el "Resumen/Introduccion/Conclusion de la tesis", dependiendo cual primero estuviera disponible, ya que la complejidad recae en la diferencia de la estructura y formato de cada una de las tesis.
#### Who are the source language producers?
Los datos son creados por humanos de forma manual, en este caso por estudiantes de la UNAM y revisados por sus supervisores.
### Annotations
El dataset fue procesado para eliminar información innecesaria para los clasificadores. El dataset original cuenta con los siguientes campos: "texto|autor\_nombre|autor\_apellido|titulo|año|carrera".
#### Annotation process
Se extrajeron primeramente 200 tesis de 5 carreras de esta universidad: Actuaría, Derecho, Economía, Psicología y Química Farmacéutico Biológica. De estas se extrajo: introducción, nombre del autor, apellidos de autor, título de la tesis y la carrera. Los datos fueron revisados y limpiados por los autores.
Luego, el dataset fue procesado con las siguientes tareas de Procesamiento de Lenguaje Natural (dataset\_tesis\_procesado):
* convertir a minúsculas
* tokenización
* eliminar palabras que no son alfanuméricas
* eliminar palabras vacías
* stemming: eliminar plurales
#### Who are the annotators?
Las anotaciones fueron hechas por humanos, en este caso los autores del dataset, usando código de máquina en el lenguaje Python.
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
El presente conjunto de datos favorecerá la búsqueda e investigación relacionada con tesis en español, a partir de su categorización automática por un modelo entrenado con este dataset. Esta tarea favorece el cumplimiento del objetivo 4 de Desarrollo Sostenible de la ONU: Educación y Calidad (URL
### Discussion of Biases
El texto tiene algunos errores en la codificación por lo que algunos caracteres como las tildes no se muestran correctamente. Las palabras con estos caracteres son eliminadas en el procesamiento hasta que se corrija el problema.
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Miembros del equipo (user de Hugging Face):
```
Isacc Isahias López López
Yisel Clavel Quintero
Dionis López
Ximena Yeraldin López López
```
### Licensing Information
La versión 1.0.0 del dataset unam\_tesis está liberada bajo la licencia <a href='URL Apache-2.0 License .
"Esta base de datos se ha creado en el marco del Hackathon 2022 de PLN en Español organizado por Somos NLP patrocinado por Platzi, Paperspace y Hugging Face: URL
Para citar este dataset, por favor, use el siguiente formato de cita:
```
@inproceedings{Hackathon 2022 de PLN en Español,
title={UNAM's Theses with BETO fine-tuning classify},
author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin},
booktitle={Hackathon 2022 de PLN en Español},
year={2022}
}
```
### Contributions
Gracias a @yiselclavel y @IsaacIsaias por agregar este dataset.
| [
"### Dataset Summary\n\n\nEl dataset unam\\_tesis cuenta con 1000 tesis de 5 carreras de la Universidad Nacional Autónoma de México (UNAM), 200 por carrera. Se pretende seguir incrementando este dataset con las demás carreras y más tesis.",
"### Supported Tasks and Leaderboards\n\n\ntext-classification",
"### Languages\n\n\nEspañol (es)\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nLas instancias del dataset son de la siguiente forma:\n\n\n\n```\nEl objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19. La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de NicolásRomero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico,con la finalidad de inferir en la población. 
Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación. Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría\n\n```",
"### Data Fields\n\n\nEl dataset está compuesto por los siguientes campos: \"texto|titulo|carrera\". \n\ntexto: Se refiere al texto de la introducción de la tesis. \n\ntitulo: Se refiere al título de la tesis. \n\ncarrera: Se refiere al nombre de la carrera a la que pertenece la tesis.",
"### Data Splits\n\n\nEl dataset tiene 2 particiones: entrenamiento (train) y prueba (test).\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nLa creación de este dataset ha sido motivada por la participación en el Hackathon 2022 de PLN en Español organizado por Somos NLP, con el objetivo de democratizar el NLP en español y promover su aplicación a buenas causas y, debido a que no existe un dataset de tesis en español.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nEl dataset original (dataset\\_tesis) fue creado a partir de un proceso de scraping donde se extrajeron tesis de la Universidad Nacional Autónoma de México en el siguiente link: URL\n\n\nSe optó por realizar un scraper para conseguir la información. Se decidió usar la base de datos TESIUNAM, la cual es un catálogo en donde se pueden visualizar las tesis de los sustentantes que obtuvieron un grado en la UNAM, así como de las tesis de licenciatura de escuelas incorporadas a ella.\n\n\nPara ello, en primer lugar se consultó la Oferta Académica (URL de la Universidad, sitio de donde se extrajo cada una de las 131 licenciaturas en forma de lista. Después, se analizó cada uno de los casos presente en la base de datos, debido a que existen carreras con más de 10 tesis, otras con menos de 10, o con solo una o ninguna tesis disponible. Se usó Selenium para la interacción con un navegador Web (Edge) y está actualmente configurado para obtener las primeras 20 tesis, o menos, por carrera.\n\n\nEste scraper obtiene de esta base de datos:\n\n\n* Nombres del Autor\n* Apellidos del Autor\n* Título de la Tesis\n* Año de la Tesis\n* Carrera de la Tesis\n\n\nA la vez, este scraper descarga cada una de las tesis en la carpeta Downloads del equipo local. En el csv formado por el scraper se añadió el \"Resumen/Introduccion/Conclusion de la tesis\", dependiendo cual primero estuviera disponible, ya que la complejidad recae en la diferencia de la estructura y formato de cada una de las tesis.",
"#### Who are the source language producers?\n\n\nLos datos son creados por humanos de forma manual, en este caso por estudiantes de la UNAM y revisados por sus supervisores.",
"### Annotations\n\n\nEl dataset fue procesado para eliminar información innecesaria para los clasificadores. El dataset original cuenta con los siguientes campos: \"texto|autor\\_nombre|autor\\_apellido|titulo|año|carrera\".",
"#### Annotation process\n\n\nSe extrajeron primeramente 200 tesis de 5 carreras de esta universidad: Actuaría, Derecho, Economía, Psicología y Química Farmacéutico Biológica. De estas se extrajo: introducción, nombre del autor, apellidos de autor, título de la tesis y la carrera. Los datos fueron revisados y limpiados por los autores.\n\n\nLuego, el dataset fue procesado con las siguientes tareas de Procesamiento de Lenguaje Natural (dataset\\_tesis\\_procesado):\n\n\n* convertir a minúsculas\n* tokenización\n* eliminar palabras que no son alfanuméricas\n* eliminar palabras vacías\n* stemming: eliminar plurales",
"#### Who are the annotators?\n\n\nLas anotaciones fueron hechas por humanos, en este caso los autores del dataset, usando código de máquina en el lenguaje Python.",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nEl presente conjunto de datos favorecerá la búsqueda e investigación relacionada con tesis en español, a partir de su categorización automática por un modelo entrenado con este dataset. Esta tarea favorece el cumplimiento del objetivo 4 de Desarrollo Sostenible de la ONU: Educación y Calidad (URL",
"### Discussion of Biases\n\n\nEl texto tiene algunos errores en la codificación por lo que algunos caracteres como las tildes no se muestran correctamente. Las palabras con estos caracteres son eliminadas en el procesamiento hasta que se corrija el problema.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nMiembros del equipo (user de Hugging Face):\n\n\n\n```\nIsacc Isahias López López \nYisel Clavel Quintero\nDionis López\nXimena Yeraldin López López\n\n```",
"### Licensing Information\n\n\nLa versión 1.0.0 del dataset unam\\_tesis está liberada bajo la licencia <a href='URL Apache-2.0 License .\n\n\n\"Esta base de datos se ha creado en el marco del Hackathon 2022 de PLN en Español organizado por Somos NLP patrocinado por Platzi, Paperspace y Hugging Face: URL\n\n\nPara citar este dataset, por favor, use el siguiente formato de cita:\n\n\n\n```\n@inproceedings{Hackathon 2022 de PLN en Español, \n title={UNAM's Theses with BETO fine-tuning classify}, \n author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin}, \n booktitle={Hackathon 2022 de PLN en Español}, \n year={2022}\n}\n\n```",
"### Contributions\n\n\nGracias a @yiselclavel y @IsaacIsaias por agregar este dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-language-modeling #annotations_creators-MajorIsaiah #annotations_creators-Ximyer #annotations_creators-clavel #annotations_creators-inoid #language_creators-crowdsourced #multilinguality-monolingual #size_categories-n=200 #source_datasets-original #language-Spanish #license-apache-2.0 #region-us \n",
"### Dataset Summary\n\n\nEl dataset unam\\_tesis cuenta con 1000 tesis de 5 carreras de la Universidad Nacional Autónoma de México (UNAM), 200 por carrera. Se pretende seguir incrementando este dataset con las demás carreras y más tesis.",
"### Supported Tasks and Leaderboards\n\n\ntext-classification",
"### Languages\n\n\nEspañol (es)\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nLas instancias del dataset son de la siguiente forma:\n\n\n\n```\nEl objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19. La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de NicolásRomero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico,con la finalidad de inferir en la población. 
Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación. Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría\n\n```",
"### Data Fields\n\n\nEl dataset está compuesto por los siguientes campos: \"texto|titulo|carrera\". \n\ntexto: Se refiere al texto de la introducción de la tesis. \n\ntitulo: Se refiere al título de la tesis. \n\ncarrera: Se refiere al nombre de la carrera a la que pertenece la tesis.",
"### Data Splits\n\n\nEl dataset tiene 2 particiones: entrenamiento (train) y prueba (test).\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nLa creación de este dataset ha sido motivada por la participación en el Hackathon 2022 de PLN en Español organizado por Somos NLP, con el objetivo de democratizar el NLP en español y promover su aplicación a buenas causas y, debido a que no existe un dataset de tesis en español.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nEl dataset original (dataset\\_tesis) fue creado a partir de un proceso de scraping donde se extrajeron tesis de la Universidad Nacional Autónoma de México en el siguiente link: URL\n\n\nSe optó por realizar un scraper para conseguir la información. Se decidió usar la base de datos TESIUNAM, la cual es un catálogo en donde se pueden visualizar las tesis de los sustentantes que obtuvieron un grado en la UNAM, así como de las tesis de licenciatura de escuelas incorporadas a ella.\n\n\nPara ello, en primer lugar se consultó la Oferta Académica (URL de la Universidad, sitio de donde se extrajo cada una de las 131 licenciaturas en forma de lista. Después, se analizó cada uno de los casos presente en la base de datos, debido a que existen carreras con más de 10 tesis, otras con menos de 10, o con solo una o ninguna tesis disponible. Se usó Selenium para la interacción con un navegador Web (Edge) y está actualmente configurado para obtener las primeras 20 tesis, o menos, por carrera.\n\n\nEste scraper obtiene de esta base de datos:\n\n\n* Nombres del Autor\n* Apellidos del Autor\n* Título de la Tesis\n* Año de la Tesis\n* Carrera de la Tesis\n\n\nA la vez, este scraper descarga cada una de las tesis en la carpeta Downloads del equipo local. En el csv formado por el scraper se añadió el \"Resumen/Introduccion/Conclusion de la tesis\", dependiendo cual primero estuviera disponible, ya que la complejidad recae en la diferencia de la estructura y formato de cada una de las tesis.",
"#### Who are the source language producers?\n\n\nLos datos son creados por humanos de forma manual, en este caso por estudiantes de la UNAM y revisados por sus supervisores.",
"### Annotations\n\n\nEl dataset fue procesado para eliminar información innecesaria para los clasificadores. El dataset original cuenta con los siguientes campos: \"texto|autor\\_nombre|autor\\_apellido|titulo|año|carrera\".",
"#### Annotation process\n\n\nSe extrajeron primeramente 200 tesis de 5 carreras de esta universidad: Actuaría, Derecho, Economía, Psicología y Química Farmacéutico Biológica. De estas se extrajo: introducción, nombre del autor, apellidos de autor, título de la tesis y la carrera. Los datos fueron revisados y limpiados por los autores.\n\n\nLuego, el dataset fue procesado con las siguientes tareas de Procesamiento de Lenguaje Natural (dataset\\_tesis\\_procesado):\n\n\n* convertir a minúsculas\n* tokenización\n* eliminar palabras que no son alfanuméricas\n* eliminar palabras vacías\n* stemming: eliminar plurales",
"#### Who are the annotators?\n\n\nLas anotaciones fueron hechas por humanos, en este caso los autores del dataset, usando código de máquina en el lenguaje Python.",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nEl presente conjunto de datos favorecerá la búsqueda e investigación relacionada con tesis en español, a partir de su categorización automática por un modelo entrenado con este dataset. Esta tarea favorece el cumplimiento del objetivo 4 de Desarrollo Sostenible de la ONU: Educación y Calidad (URL",
"### Discussion of Biases\n\n\nEl texto tiene algunos errores en la codificación por lo que algunos caracteres como las tildes no se muestran correctamente. Las palabras con estos caracteres son eliminadas en el procesamiento hasta que se corrija el problema.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nMiembros del equipo (user de Hugging Face):\n\n\n\n```\nIsacc Isahias López López \nYisel Clavel Quintero\nDionis López\nXimena Yeraldin López López\n\n```",
"### Licensing Information\n\n\nLa versión 1.0.0 del dataset unam\\_tesis está liberada bajo la licencia <a href='URL Apache-2.0 License .\n\n\n\"Esta base de datos se ha creado en el marco del Hackathon 2022 de PLN en Español organizado por Somos NLP patrocinado por Platzi, Paperspace y Hugging Face: URL\n\n\nPara citar este dataset, por favor, use el siguiente formato de cita:\n\n\n\n```\n@inproceedings{Hackathon 2022 de PLN en Español, \n title={UNAM's Theses with BETO fine-tuning classify}, \n author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin}, \n booktitle={Hackathon 2022 de PLN en Español}, \n year={2022}\n}\n\n```",
"### Contributions\n\n\nGracias a @yiselclavel y @IsaacIsaias por agregar este dataset."
] |
e94ac1f1b72be4a83408f20a8d49ffd98e9724b1 | # Reddit data extraction
All thread titles from several Spanish-speaking Reddit communities were downloaded, covering March 2017 to January 2022:
| Community | Number of threads |
|----------------------------|-------------|
|AskRedditespanol | 28072 |
| BOLIVIA | 4935 |
| PERU | 20735 |
| argentina | 214986 |
| chile | 69077 |
|espanol | 39376 |
| mexico | 136984 |
| preguntaleareddit | 37300 |
| uruguay | 55693 |
| vzla | 42909 |
# Labels
Next, some of the threads were manually labelled to mark AMA vs. non-AMA.
A total of 757 threads were labelled (AMA: 290, non-AMA: 458), following a query-by-committee strategy.
These labels can be inspected in the file `etiqueta_ama.csv`.
Using these 757 threads, a label spreading algorithm was run to identify the remaining AMA threads, which yielded a total of 3519 AMA threads.
The automatically labelled threads can be inspected in the file `autoetiquetado_ama.csv`.
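A rough sketch of what such a label spreading step can look like with scikit-learn — the file and column names below are placeholders for the sketch, not the actual files released with this dataset:
```python
# Hypothetical sketch of the label spreading step (file and column names are assumptions).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.semi_supervised import LabelSpreading

# One row per thread; 'label' is 1 (AMA), 0 (not AMA) or -1 (not yet labelled).
df = pd.read_csv("hilos.csv")

# Turn thread titles into dense numeric features.
tfidf = TfidfVectorizer(max_features=20000)
features = TruncatedSVD(n_components=100, random_state=0).fit_transform(
    tfidf.fit_transform(df["titulo"].fillna(""))
)

# Spread the manual labels to the unlabelled threads.
spreader = LabelSpreading(kernel="knn", n_neighbors=10)
spreader.fit(features, df["label"])
df["label_auto"] = spreader.transduction_

print((df["label_auto"] == 1).sum(), "threads flagged as AMA")
```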
To identify the professions of the people who created the threads, the following list was used:
https://raw.githubusercontent.com/davoclavo/adigmatangadijolachanga/master/profesiones.txt
To cover all possibilities, both the "-a" and "-o" endings of every profession were added.
Similar professions were then grouped together, so that each profession had a comparable number of threads, using the following dictionary:
```
sinonimos = {
'sexologo': 'psicologo',
'enfermero': 'medico',
'farmaceutico': 'medico',
'cirujano': 'medico',
'doctor': 'medico',
'radiologo': 'medico',
'dentista': 'odontologo',
'matron': 'medico',
'patologo': 'medico',
'educador': 'profesor',
'maestro': 'profesor',
'programador': 'ingeniero',
'informatico': 'ingeniero',
'juez': 'abogado',
'fiscal': 'abogado',
'oficial': 'abogado',
'astronomo': 'ciencias',
'fisico': 'ciencias',
'ecologo': 'ciencias',
'filosofo': 'ciencias',
'biologo': 'ciencias',
'zoologo': 'ciencias',
'quimico': 'ciencias',
'matematico': 'ciencias',
'meteorologo': 'ciencias',
'periodista': 'humanidades',
'dibujante': 'humanidades',
'fotografo': 'humanidades',
'traductor': 'humanidades',
'presidente': 'jefe',
'gerente': 'jefe'
}
```
All comments from the AMA threads that contained any of these professions were then downloaded and grouped, keeping only the comments that contained a question mark and received a reply from the thread author, forming question-answer pairs.
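As a rough illustration, that pairing step can be expressed with pandas — the schema used here (`id`, `parent_id`, `thread_id`, `author`, `body`) is an assumption for the sketch, not the schema of the released files:
```python
# Illustrative sketch of building the question-answer pairs (schema is assumed).
import pandas as pd

comments = pd.read_csv("comentarios.csv")   # id, parent_id, thread_id, author, body
threads = pd.read_csv("hilos_ama.csv")      # thread_id, author (the person running the AMA)

op_of_thread = threads.set_index("thread_id")["author"]
is_op = comments["author"] == comments["thread_id"].map(op_of_thread)

# Candidate questions: comments with a question mark that were not written by the thread author.
questions = comments[comments["body"].str.contains(r"\?", na=False) & ~is_op]

# Answers: replies written by the thread author.
answers = comments[is_op]

pairs = questions.merge(
    answers, left_on="id", right_on="parent_id", suffixes=("_question", "_answer")
)
pairs[["thread_id_question", "body_question", "body_answer"]].to_csv("qa_pairs.csv", index=False)
```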
Finally, only the professions with more than 200 question-answer pairs were kept, which adds up to around 3000 question-answer pairs.
These pairs can be inspected in the file `qa_corpus_profesion.csv`. | hackathon-pln-es/ITAMA-DataSet | [
"region:us"
] | 2022-04-04T00:21:26+00:00 | {} | 2022-04-04T02:32:20+00:00 | [] | [] | TAGS
#region-us
| Reddit data extraction
=======================
All thread titles from several Spanish-speaking Reddit communities were downloaded, covering March 2017 to January 2022:
Labels
======
Next, some of the threads were manually labelled to mark AMA vs. non-AMA.
A total of 757 threads were labelled (AMA: 290, non-AMA: 458), following a query-by-committee strategy.
These labels can be inspected in the file 'etiqueta\_ama.csv'.
Using these 757 threads, a label spreading algorithm was run to identify the remaining AMA threads, which yielded a total of 3519 AMA threads.
The automatically labelled threads can be inspected in the file 'autoetiquetado\_ama.csv'.
To identify the professions of the people who created the threads, the following list was used:
URL
To cover all possibilities, both the "-a" and "-o" endings of every profession were added.
Similar professions were then grouped together, so that each profession had a comparable number of threads, using the following dictionary:
All comments from the AMA threads that contained any of these professions were then downloaded and grouped, keeping only the comments that contained a question mark and received a reply from the thread author, forming question-answer pairs.
Finally, only the professions with more than 200 question-answer pairs were kept, which adds up to around 3000 question-answer pairs.
These pairs can be inspected in the file 'qa\_corpus\_profesion.csv'.
| [] | [
"TAGS\n#region-us \n"
] |
66d3e93c84abc82d96ad84beb30bef404f0957ac | The dataset was built on 2022/03/29 to help improve the representation of the Spanish language in NLP tasks on the Hugging Face platform.
The dataset contains 2,471 tweets obtained from their tweet_id. The dataset considers the following columns:
- Column 1( Status_id): Corresponds to the unique identification number of the tweet in the social network.
- Column 2( text): Corresponds to the text (in Spanish) linked to the corresponding "Status_Id", which is used to perform the sexism analysis.
- Column 3 (Category): Corresponds to the classification that has been made when analyzing the text (in Spanish), considering three categories: (SEXIST,NON_SEXIST,DOUBTFUL)
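As a sketch, these three columns can be inspected with the `datasets` library — whether the repository loads directly this way depends on how the files are stored, so adjust as needed:
```python
from datasets import load_dataset
from collections import Counter

# Sketch only: assumes the repository exposes a default split readable by `datasets`.
tweets = load_dataset("ManRo/Sexism_Twitter_MeTwo", split="train")

print(tweets.column_names)                 # expected: Status_id, text, Category
print(tweets[0]["text"], "->", tweets[0]["Category"])

# Rough class distribution over SEXIST / NON_SEXIST / DOUBTFUL.
print(Counter(tweets["Category"]))
```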
The dataset has been built thanks to the previous work of F. Rodríguez-Sánchez, J. Carrillo-de-Albornoz and L. Plaza, from the MeTwo Machismo and Sexism Twitter Identification dataset.
For more information on the categorization process check: https://ieeexplore.ieee.org/document/9281090 | ManRo/Sexism_Twitter_MeTwo | [
"license:apache-2.0",
"region:us"
] | 2022-04-04T01:15:01+00:00 | {"license": "apache-2.0"} | 2022-04-04T10:46:05+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| The dataset was built on 2022/03/29 to help improve the representation of the Spanish language in NLP tasks on the Hugging Face platform.
The dataset contains 2,471 tweets obtained from their tweet_id. The dataset considers the following columns:
- Column 1( Status_id): Corresponds to the unique identification number of the tweet in the social network.
- Column 2( text): Corresponds to the text (in Spanish) linked to the corresponding "Status_Id", which is used to perform the sexism analysis.
- Column 3 (Category): Corresponds to the classification that has been made when analyzing the text (in Spanish), considering three categories: (SEXIST,NON_SEXIST,DOUBTFUL)
The dataset has been built thanks to the previous work of F. Rodríguez-Sánchez, J. Carrillo-de-Albornoz and L. Plaza, from the MeTwo Machismo and Sexism Twitter Identification dataset.
For more information on the categorization process check: URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
ad894516a8db0f6d292da5b7194b2729f47c02f9 |
Using Google Translation, we have translated the SQuAD 2.0 dataset into multiple languages.
Here is the translated SQuAD 2.0 dataset in French.
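If the files are stored in a format that the `datasets` library can read directly, loading the French data might look like the following sketch (the split and field names are assumed to follow the usual SQuAD schema):
```python
from datasets import load_dataset

# Sketch: assumes a SQuAD-style schema (question / context / answers) and a "train" split.
squad_fr = load_dataset("pragnakalp/squad_v2_french_translated", split="train")

example = squad_fr[0]
print(example["question"])
print(example["context"][:200])
print(example["answers"])
```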
Shared by [Pragnakalp Techlabs](https://www.pragnakalp.com) | pragnakalp/squad_v2_french_translated | [
"multilinguality:monolingual",
"multilinguality:translation",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2022-04-04T04:44:07+00:00 | {"language": "fr", "license": "apache-2.0", "multilinguality": ["monolingual", "translation"]} | 2022-08-29T06:49:15+00:00 | [] | [
"fr"
] | TAGS
#multilinguality-monolingual #multilinguality-translation #language-French #license-apache-2.0 #region-us
|
Using Google Translation, we have translated the SQuAD 2.0 dataset into multiple languages.
Here is the translated SQuAD 2.0 dataset in French.
Shared by Pragnakalp Techlabs | [] | [
"TAGS\n#multilinguality-monolingual #multilinguality-translation #language-French #license-apache-2.0 #region-us \n"
] |
8d5f91d054aafc2a98eacfc2715c031113cd1bc0 | Kaggle based dataset for text classification task. The data has been cleaned and processed for preparation into any model for classification based tasks. This is just 40% of the entire dataset. | ikekobby/40-percent-cleaned-preprocessed-fake-real-news | [
"region:us"
] | 2022-04-04T08:26:47+00:00 | {} | 2022-04-04T08:41:40+00:00 | [] | [] | TAGS
#region-us
| Kaggle based dataset for text classification task. The data has been cleaned and processed for preparation into any model for classification based tasks. This is just 40% of the entire dataset. | [] | [
"TAGS\n#region-us \n"
] |
dae4dcc041f173bc7134be9d562d0f996693aa07 | # Neural Audio Fingerprint Dataset
(c) 2021 by Sungkyun Chang
https://github.com/mimbres/neural-audio-fp
This dataset includes all music sources, background noise and impulse-response
(IR) samples that have been used in the work ["Neural Audio Fingerprint for
High-specific Audio Retrieval based on Contrastive Learning"]
(https://arxiv.org/abs/2010.11910).
### Format:
16-bit PCM Mono WAV, Sampling rate 8000 Hz
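A quick way to sanity-check a downloaded file against this format with the `soundfile` package (the path below is a placeholder):
```python
import soundfile as sf

# Placeholder path - point it at any WAV file from the dataset.
path = "music/train-10k-30s/some_track.wav"

info = sf.info(path)
print(info.samplerate, info.channels, info.subtype)   # expect 8000, 1, 'PCM_16'

audio, sr = sf.read(path, dtype="int16")              # 16-bit mono samples
assert sr == 8000 and audio.ndim == 1
```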
### Description:
```
/
fingerprint_dataset_icassp2021/
├── aug
│ ├── bg <=== Pub/cafe etc. background noise mix
│ ├── ir <=== IR data for microphone and room reverb simulation
│ └── speech <=== English conversation, NOT USED IN THE PAPER RESULT
├── extras
│ └── fma_info <=== Meta data for music sources.
└── music
├── test-dummy-db-100k-full <== 100K songs of full-lengths
├── test-query-db-500-30s <== 500 songs (30s) and 2K synthesized queries
├── train-10k-30s <== 10K songs (30s) for training
└── val-query-db-500-30s <== 500 songs (30s) for validation/mini-search
```
### Data source:
• Background noise from Audioset was retrieved using key words ['subway',
'metro', 'underground', 'not music']
• Cochlear.ai pub-noise was recorded at the Starbucks branches in Seoul by
Jeongsoo Park.
• Random noise was generated by Donmoon Lee.
• Room/space IR data was collected from Aachen IR and OpenAIR 1.4 dataset.
• Portions of MIC IRs were from Vintage MIC (http://recordinghacks.com/), and
pre-processed with room/space IR data.
• Portions of MIC IRs were recorded by Donmoon Lee, Jeonsu Park and Hyungui Lim
using mobile devices in the anechoic chamber at Seoul National University.
• All music sources were taken from the Free Music Archive (FMA) data set,
and converted from `stereo 44Khz` to `mono 8Khz`.
• train-10k-30s contains all 8K songs from FMA_small. The remaining 2K songs
were from FMA_medium.
• val- and test- data were isolated from train-, and taken from FMA_medium.
• test-query-db-500-30s/query consists of the pre-synthesized 2,000 queries
of 10s each (SNR 0~10dB) and their corresponding 500 songs of 30s each.
• Additionally, query_fixed_SNR directory contains synthesized queries with
fixed SNR of 0dB and -3dB.
• dummy-db-100k was taken from FMA_full, and duplicates with other sets were
removed.
### License:
This dataset is distributed under the CC BY-SA 2.0 license separately from the
github source code, and licenses for composites from other datasets are
attached to each sub-directory.
| arch-raven/music-fingerprint-dataset | [
"arxiv:2010.11910",
"region:us"
] | 2022-04-04T09:06:23+00:00 | {} | 2022-04-05T10:48:05+00:00 | [
"2010.11910"
] | [] | TAGS
#arxiv-2010.11910 #region-us
| # Neural Audio Fingerprint Dataset
(c) 2021 by Sungkyun Chang
URL
This dataset includes all music sources, background noise and impulse-response
(IR) samples that have been used in the work ["Neural Audio Fingerprint for
High-specific Audio Retrieval based on Contrastive Learning"]
(URL
### Format:
16-bit PCM Mono WAV, Sampling rate 8000 Hz
### Description:
### Data source:
• Background noise from Audioset was retrieved using key words ['subway',
'metro', 'underground', 'not music']
• URL pub-noise was recorded at the Starbucks branches in Seoul by
Jeongsoo Park.
• Random noise was generated by Donmoon Lee.
• Room/space IR data was collected from Aachen IR and OpenAIR 1.4 dataset.
• Portions of MIC IRs were from Vintage MIC (URL and
pre-processed with room/space IR data.
• Portions of MIC IRs were recorded by Donmoon Lee, Jeonsu Park and Hyungui Lim
using mobile devices in the anechoic chamber at Seoul National University.
• All music sources were taken from the Free Music Archive (FMA) data set,
and converted from 'stereo 44Khz' to 'mono 8Khz'.
• train-10k-30s contains all 8K songs from FMA_small. The remaining 2K songs
were from FMA_medium.
• val- and test- data were isolated from train-, and taken from FMA_medium.
• test-query-db-500-30s/query consists of the pre-synthesized 2,000 queries
of 10s each (SNR 0~10dB) and their corresponding 500 songs of 30s each.
• Additionally, query_fixed_SNR directory contains synthesized queries with
fixed SNR of 0dB and -3dB.
• dummy-db-100k was taken from FMA_full, and duplicates with other sets were
removed.
### License:
This dataset is distributed under the CC BY-SA 2.0 license separately from the
github source code, and licenses for composites from other datasets are
attached to each sub-directory.
| [
"# Neural Audio Fingerprint Dataset\n(c) 2021 by Sungkyun Chang\nURL\n\nThis dataset includes all music sources, background noise and impulse-reponses\n (IR) samples that have been used in the work [\"Neural Audio Fingerprint for\n High-specific Audio Retrieval based on Contrastive Learning\"]\n (URL",
"### Format:\n16-bit PCM Mono WAV, Sampling rate 8000 Hz",
"### Description:",
"### Data source:\n • Bacgkound noise from Audioset was retrieved using key words ['subway',\n 'metro', 'underground', 'not music']\n • URL pub-noise was recorded at the Strabucks branches in Seoul by\n Jeongsoo Park.\n • Random noise was generated by Donmoon Lee.\n • Room/space IR data was collected from Aachen IR and OpenAIR 1.4 dataset.\n • Portions of MIC IRs were from Vintage MIC (URL and\n pre-processed with room/space IR data.\n • Portions of MIC IRs were recorded by Donmoon Lee, Jeonsu Park and Hyungui Lim\n using mobile devices in the anechoic chamber at Seoul National University.\n • All music sources were taken from the Free Music Archive (FMA) data set, \n and converted from 'stereo 44Khz' to 'mono 8Khz'.\n • train-10k-30s contains all 8K songs from FMA_small. The remaining 2K songs\n were from FMA_medium.\n • val- and test- data were isolated from train-, and taken from FMA_medium.\n • test-query-db-500-30s/query consists of the pre-synthesized 2,000 queries\n of 10s each (SNR 0~10dB) and their corresponding 500 songs of 30s each.\n • Additionally, query_fixed_SNR directory contains synthesized queries with\n fixed SNR of 0dB and -3dB.\n • dummy-db-100k was taken from FMA_full, and duplicates with other sets were\n removed.",
"### License:\nThis dataset is distributed under the CC BY-SA 2.0 license separately from the\n github source code, and licenses for composites from other datasets are\n attached to each sub-directory."
] | [
"TAGS\n#arxiv-2010.11910 #region-us \n",
"# Neural Audio Fingerprint Dataset\n(c) 2021 by Sungkyun Chang\nURL\n\nThis dataset includes all music sources, background noise and impulse-reponses\n (IR) samples that have been used in the work [\"Neural Audio Fingerprint for\n High-specific Audio Retrieval based on Contrastive Learning\"]\n (URL",
"### Format:\n16-bit PCM Mono WAV, Sampling rate 8000 Hz",
"### Description:",
"### Data source:\n • Bacgkound noise from Audioset was retrieved using key words ['subway',\n 'metro', 'underground', 'not music']\n • URL pub-noise was recorded at the Strabucks branches in Seoul by\n Jeongsoo Park.\n • Random noise was generated by Donmoon Lee.\n • Room/space IR data was collected from Aachen IR and OpenAIR 1.4 dataset.\n • Portions of MIC IRs were from Vintage MIC (URL and\n pre-processed with room/space IR data.\n • Portions of MIC IRs were recorded by Donmoon Lee, Jeonsu Park and Hyungui Lim\n using mobile devices in the anechoic chamber at Seoul National University.\n • All music sources were taken from the Free Music Archive (FMA) data set, \n and converted from 'stereo 44Khz' to 'mono 8Khz'.\n • train-10k-30s contains all 8K songs from FMA_small. The remaining 2K songs\n were from FMA_medium.\n • val- and test- data were isolated from train-, and taken from FMA_medium.\n • test-query-db-500-30s/query consists of the pre-synthesized 2,000 queries\n of 10s each (SNR 0~10dB) and their corresponding 500 songs of 30s each.\n • Additionally, query_fixed_SNR directory contains synthesized queries with\n fixed SNR of 0dB and -3dB.\n • dummy-db-100k was taken from FMA_full, and duplicates with other sets were\n removed.",
"### License:\nThis dataset is distributed under the CC BY-SA 2.0 license separately from the\n github source code, and licenses for composites from other datasets are\n attached to each sub-directory."
] |
b53451cabd92c2481263cf09a14a670e1d8e6f8c |
# Dataset Card for [readability-es-sentences]
## Dataset Description
Compilation of short Spanish articles for readability assessment.
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- **Coh-Metrix-Esp corpus (Quispesaravia, et al., 2016):** collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category.
- **[kwiziq](https://www.kwiziq.com/):** a language learner assistant
- **[hablacultura.com](https://hablacultura.com/):** Spanish resources for students and teachers. We have downloaded the available content in their websites.
### Languages
Spanish
## Dataset Structure
The dataset includes 1019 text entries between 80 and 8714 characters long. The vast majority (97%) are below 4,000 characters long.
### Data Fields
The dataset is formatted as JSON lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: complex or simple.
- **Level-3:** standardized readability level: basic, intermediate or advanced
- **Text:** original text formatted into sentences.
Not all the entries contain usable values for `category`, `level` and `level-3`, but all of them should contain at least one of `level`, `level-3`. When the corresponding information could not be derived, we use the special `"N/A"` value to indicate so.
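A hedged loading sketch with the `datasets` library (it assumes the JSON-lines files are exposed as a default `train` split; note the bracket access needed for the hyphenated `level-3` key):
```python
from datasets import load_dataset

# Sketch: assumes the JSON-lines files load as a default "train" split.
ds = load_dataset("hackathon-pln-es/readability-es-hackathon-pln-public", split="train")

sample = ds[0]
print(sample["category"], sample["level"], sample["level-3"])
print(sample["text"][:200])

# Keep only entries that carry a usable complex/simple label.
binary = ds.filter(lambda x: x["level"] != "N/A")
print(len(binary), "entries with a binary readability label")
```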
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
| hackathon-pln-es/readability-es-hackathon-pln-public | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"readability",
"region:us"
] | 2022-04-04T09:26:51+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "readability-es-sentences", "tags": ["readability"]} | 2023-04-13T07:51:15+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Spanish #license-cc-by-4.0 #readability #region-us
|
# Dataset Card for [readability-es-sentences]
## Dataset Description
Compilation of short Spanish articles for readability assessment.
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- Coh-Metrix-Esp corpus (Quispesaravia, et al., 2016): collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category.
- kwiziq: a language learner assistant
- URL: Spanish resources for students and teachers. We have downloaded the available content in their websites.
### Languages
Spanish
## Dataset Structure
The dataset includes 1019 text entries between 80 and 8714 characters long. The vast majority (97%) are below 4,000 characters long.
### Data Fields
The dataset is formatted as JSON lines and includes the following fields:
- Category: when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- Level: standardized readability level: complex or simple.
- Level-3: standardized readability level: basic, intermediate or advanced
- Text: original text formatted into sentences.
Not all the entries contain usable values for 'category', 'level' and 'level-3', but all of them should contain at least one of 'level', 'level-3'. When the corresponding information could not be derived, we use the special '"N/A"' value to indicate so.
## Additional Information
### Licensing Information
URL
Please cite this page to give credit to the authors :)
### Team
- Laura Vásquez-Rodríguez
- Pedro Cuenca
- Sergio Morales
- Fernando Alva-Manchego
| [
"# Dataset Card for [readability-es-sentences]",
"## Dataset Description\n\nCompilation of short Spanish articles for readability assessment.",
"### Dataset Summary\n\nThis dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources: \n- Coh-Metrix-Esp corpus (Quispesaravia, et al., 2016): collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category.\n- kwiziq: a language learner assistant\n- URL: Spanish resources for students and teachers. We have downloaded the available content in their websites.",
"### Languages\n\nSpanish",
"## Dataset Structure\n\nThe dataset includes 1019 text entries between 80 and 8714 characters long. The vast majority (97%) are below 4,000 characters long.",
"### Data Fields\n\nThe dataset is formatted as a json lines and includes the following fields:\n- Category: when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).\n- Level: standardized readability level: complex or simple. \n- Level-3: standardized readability level: basic, intermediate or advanced\n- Text: original text formatted into sentences.\n\nNot all the entries contain usable values for 'category', 'level' and 'level-3', but all of them should contain at least one of 'level', 'level-3'. When the corresponding information could not be derived, we use the special '\"N/A\"' value to indicate so.",
"## Additional Information",
"### Licensing Information\n\nURL\n\n\n\nPlease cite this page to give credit to the authors :)",
"### Team\n- Laura Vásquez-Rodríguez\n- Pedro Cuenca\n- Sergio Morales\n- Fernando Alva-Manchego"
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Spanish #license-cc-by-4.0 #readability #region-us \n",
"# Dataset Card for [readability-es-sentences]",
"## Dataset Description\n\nCompilation of short Spanish articles for readability assessment.",
"### Dataset Summary\n\nThis dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources: \n- Coh-Metrix-Esp corpus (Quispesaravia, et al., 2016): collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category.\n- kwiziq: a language learner assistant\n- URL: Spanish resources for students and teachers. We have downloaded the available content in their websites.",
"### Languages\n\nSpanish",
"## Dataset Structure\n\nThe dataset includes 1019 text entries between 80 and 8714 characters long. The vast majority (97%) are below 4,000 characters long.",
"### Data Fields\n\nThe dataset is formatted as a json lines and includes the following fields:\n- Category: when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).\n- Level: standardized readability level: complex or simple. \n- Level-3: standardized readability level: basic, intermediate or advanced\n- Text: original text formatted into sentences.\n\nNot all the entries contain usable values for 'category', 'level' and 'level-3', but all of them should contain at least one of 'level', 'level-3'. When the corresponding information could not be derived, we use the special '\"N/A\"' value to indicate so.",
"## Additional Information",
"### Licensing Information\n\nURL\n\n\n\nPlease cite this page to give credit to the authors :)",
"### Team\n- Laura Vásquez-Rodríguez\n- Pedro Cuenca\n- Sergio Morales\n- Fernando Alva-Manchego"
] |
2e1b744445b279b21a6d1aeacfb3dff8d2acf7fa | This dataset contains images from iNaturalist of butterflies (superfamily Papilionoidea) with at least one fave. Check the descriptions - some images have a licence like CC-BY-NC and can't be used for commercial purposes.
The list of observations was exported from iNaturalist after a query similar to https://www.inaturalist.org/observations?place_id=any&popular&taxon_id=47224
The images were downloaded with img2dataset and uploaded to the huggingface hub by @johnowhitaker using this colab notebook: https://colab.research.google.com/drive/14qwFV_G4dh6evizzqHP08qDUAHtzfuiW?usp=sharing
The goal is to have a dataset of butterflies in different poses and settings, to use for GAN training and to compare with datasets built with museum collections of pinned specimens (which tend to be much cleaner and have more consistency of pose etc)
I'm not familiar with the nuances of Creative Commons licencing but you may wish to filter out images which are no-derivatives (CC-...-ND) when training a GAN or creating new images.
"region:us"
] | 2022-04-04T09:34:36+00:00 | {} | 2022-04-04T09:53:19+00:00 | [] | [] | TAGS
#region-us
| This dataset contains images from iNaturalist of butterflies (superfamily Papilionoidea) with at least one fave. Check the descriptions - some images have a licence like CC-BY-NC and can't be used for commercial purposes.
The list of observations was exported from iNaturalist after a query similar to URL
The images were downloaded with img2dataset and uploaded to the huggingface hub by @johnowhitaker using this colab notebook: URL
The goal is to have a dataset of butterflies in different poses and settings, to use for GAN training and to compare with datasets built with museum collections of pinned specimens (which tend to be much cleaner and have more consistency of pose etc)
I'm not familiar with the nuances of Creative Commons licencing but you may wish to filter out images which are no-derivatives (CC-...-ND) when training a GAN or creating new images.
"TAGS\n#region-us \n"
] |
d73ccef8b255c317a226912071e92b272c55dc43 |
# Dataset Card for "huggingartists/olga-buzova"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.164278 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/efacbc8bb2d22ab78e494539bba61b3e.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/olga-buzova">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ольга Бузова (Olga Buzova)</div>
<a href="https://genius.com/artists/olga-buzova">
<div style="text-align: center; font-size: 14px;">@olga-buzova</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/olga-buzova).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/olga-buzova")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|66| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/olga-buzova")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/olga-buzova | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-04-04T10:18:31+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"], "models": ["huggingartists/olga-buzova"]} | 2022-10-25T09:03:54+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/olga-buzova"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 0.164278 MB
HuggingArtists Model
Ольга Бузова (Olga Buzova)
[@olga-buzova](URL)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\n\nFor more details, visit the project repository.\n\n\n\n\n\nFor more details, visit the project repository.\n\n\n
- [Dataset Summary](#dataset-summary)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
- **Point of Contact:** [Nart Tlisha](mailto:[email protected])
- **Size of the generated dataset:** 176 MB
### Dataset Summary
The Abkhaz language monolingual dataset is a collection of 1,470,480 sentences extracted from different sources. The dataset is available under the Creative Commons Universal Public Domain License. Part of it is also available as part of [Common Voice](https://commonvoice.mozilla.org/ab), another part is from the [Abkhaz National Corpus](https://clarino.uib.no/abnc)
## Dataset Creation
### Source Data
Here is a link to the source of a large part of the data on [github](https://github.com/danielinux7/Multilingual-Parallel-Corpus/blob/master/ebooks/reference.md)
## Considerations for Using the Data
### Other Known Limitations
The accuracy of the dataset is around 95% (grammatical, orthographical errors)
| Nart/abkhaz_text | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ab",
"license:cc0-1.0",
"region:us"
] | 2022-04-04T10:57:51+00:00 | {"language_creators": ["expert-generated"], "language": ["ab"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Abkhaz monolingual corpus"} | 2022-11-01T10:53:17+00:00 | [] | [
"ab"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Abkhazian #license-cc0-1.0 #region-us
|
# Dataset Card for "Abkhaz text"
## Table of Contents
- Dataset Description
- Dataset Summary
- Considerations for Using the Data
- Other Known Limitations
## Dataset Description
- Point of Contact: Nart Tlisha
- Size of the generated dataset: 176 MB
### Dataset Summary
The Abkhaz language monolingual dataset is a collection of 1,470,480 sentences extracted from different sources. The dataset is available under the Creative Commons Universal Public Domain License. Part of it is also available as part of Common Voice, another part is from the Abkhaz National Corpus
## Dataset Creation
### Source Data
Here is a link to the source of a large part of the data on github
## Considerations for Using the Data
### Other Known Limitations
The accuracy of the dataset is around 95% (grammatical, orthographical errors)
| [
"# Dataset Card for \"Abkhaz text\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n- Considerations for Using the Data\n - Other Known Limitations",
"## Dataset Description\n- Point of Contact: Nart Tlisha\n- Size of the generated dataset: 176 MB",
"### Dataset Summary\n\nThe Abkhaz language monolingual dataset is a collection of 1,470,480 sentences extracted from different sources. The dataset is available under the Creative Commons Universal Public Domain License. Part of it is also available as part of Common Voice, another part is from the Abkhaz National Corpus",
"## Dataset Creation",
"### Source Data\nHere is a link to the source of a large part of the data on github",
"## Considerations for Using the Data",
"### Other Known Limitations\nThe accuracy of the dataset is around 95% (gramatical, arthographical errors)"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Abkhazian #license-cc0-1.0 #region-us \n",
"# Dataset Card for \"Abkhaz text\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n- Considerations for Using the Data\n - Other Known Limitations",
"## Dataset Description\n- Point of Contact: Nart Tlisha\n- Size of the generated dataset: 176 MB",
"### Dataset Summary\n\nThe Abkhaz language monolingual dataset is a collection of 1,470,480 sentences extracted from different sources. The dataset is available under the Creative Commons Universal Public Domain License. Part of it is also available as part of Common Voice, another part is from the Abkhaz National Corpus",
"## Dataset Creation",
"### Source Data\nHere is a link to the source of a large part of the data on github",
"## Considerations for Using the Data",
"### Other Known Limitations\nThe accuracy of the dataset is around 95% (gramatical, arthographical errors)"
] |
49f91f486696456ead1685e46fbd63e6520f2537 | Filtered version of https://huggingface.co/datasets/huggan/inat_butterflies
To pick the best images, CLIP was used to compare each image with a text description of a good image ("")
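The filtering notebook is linked below; in general, scoring images against a text prompt with CLIP looks roughly like this sketch (the prompt string, checkpoint and paths here are illustrative — the exact prompt used for this dataset is not recorded above):
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a clear photo of a single butterfly"             # placeholder, not the original prompt
images = [Image.open(p) for p in ["img1.jpg", "img2.jpg"]]  # placeholder paths

inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_image.squeeze(-1)   # one similarity score per image

# Higher score = closer to the prompt; keep the best-ranked images.
print(scores.argsort(descending=True))
```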
Notebook for the filtering: https://colab.research.google.com/drive/1OEqr1TtL4YJhdj_bebNWXRuG3f2YqtQE?usp=sharing
See the original dataset for sources and licence caveats (tl;dr check the image descriptions to make sure you aren't breaking a licence like CC-BY-NC-ND which some images have) | huggan/inat_butterflies_top10k | [
"region:us"
] | 2022-04-04T11:45:06+00:00 | {} | 2022-04-04T11:50:28+00:00 | [] | [] | TAGS
#region-us
| Filtered version of URL
To pick the best images, CLIP was used to compare each image with a text description of a good image ("")
Notebook for the filtering: URL
See the original dataset for sources and licence caveats (tl;dr check the image descriptions to make sure you aren't breaking a licence like CC-BY-NC-ND which some images have) | [] | [
"TAGS\n#region-us \n"
] |
596623eb34923ccd0eb540ea1f737cd09c304e58 |
# Dataset Description
## Dataset Summary
This dataset was parsed from the Human-HIV Interaction dataset maintained by the NCBI.
It contains >16,000 pairs of interactions between HIV and human proteins.
Sequences of the interacting proteins were retrieved from the NCBI protein database and added to the dataset.
The raw data is available from the [NBCI FTP site](https://ftp.ncbi.nlm.nih.gov/gene/GeneRIF/hiv_interactions.gz) and the curation strategy is described in the [NAR Research paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4383939/) announcing the dataset.
## Dataset Structure
### Data Instances
Data Fields: hiv_protein_product, hiv_protein_name, interaction_type, human_protein_product, human_protein_name, reference_list, description, hiv_protein_sequence, human_protein_sequence
Data Splits: None
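A hedged loading example with the `datasets` library (the `train` split name is an assumption; check the repository for the actual configuration):
```python
from datasets import load_dataset

# Sketch: assumes a single default "train" split.
ppi = load_dataset("damlab/human_hiv_ppi", split="train")

row = ppi[0]
print(row["hiv_protein_name"], "--", row["interaction_type"], "-->", row["human_protein_name"])
print(len(row["hiv_protein_sequence"]), "aa /", len(row["human_protein_sequence"]), "aa")
```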
## Dataset Creation
Curation Rationale: This dataset was curated to train models to recognize proteins that interact with HIV.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 4/4/2022 but the most recent update of the underlying NCBI database was 2016.
## Considerations for Using the Data
Discussion of Biases: This dataset of protein interactions was manually curated by experts utilizing published scientific literature.
This inherently biases the collection to well-studied proteins and known interactions.
The dataset does not contain _negative_ interactions.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA | damlab/human_hiv_ppi | [
"license:mit",
"region:us"
] | 2022-04-04T13:24:30+00:00 | {"license": "mit"} | 2022-04-04T13:38:49+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Dataset Description
## Dataset Summary
This dataset was parsed from the Human-HIV Interaction dataset maintained by the NCBI.
It contains >16,000 pairs of interactions between HIV and human proteins.
Sequences of the interacting proteins were retrieved from the NCBI protein database and added to the dataset.
The raw data is available from the NCBI FTP site and the curation strategy is described in the NAR Research paper announcing the dataset.
## Dataset Structure
### Data Instances
Data Fields: hiv_protein_product, hiv_protein_name, interaction_type, human_protein_product, human_protein_name, reference_list, description, hiv_protein_sequence, human_protein_sequence
Data Splits: None
## Dataset Creation
Curation Rationale: This dataset was curated to train models to recognize proteins that interact with HIV.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 4/4/2022 but the most recent update of the underlying NCBI database was 2016.
## Considerations for Using the Data
Discussion of Biases: This dataset of protein interactions was manually curated by experts utilizing published scientific literature.
This inherently biases the collection to well-studied proteins and known interactions.
The dataset does not contain _negative_ interactions.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA | [
"# Dataset Description",
"## Dataset Summary\n\nThis dataset was parsed from the Human-HIV Interaction dataset maintained by the NCBI. \nIt contains a >16,000 pairs of interactions between HIV and Human proteins.\nSequences of the interacting proteins were retrieved from the NCBI protein database and added to the dataset.\nThe raw data is available from the NBCI FTP site and the curation strategy is described in the NAR Research paper announcing the dataset.",
"## Dataset Structure",
"### Data Instances\n\nData Fields: hiv_protein_product, hiv_protein_name, interaction_type, human_protein_product, human_protein_name, reference_list, description, hiv_protein_sequence, human_protein_sequence\n\nData Splits: None",
"## Dataset Creation\n\nCuration Rationale: This dataset was curated train models to recognize proteins that interact with HIV.\n\nInitial Data Collection and Normalization: Dataset was downloaded and curated on 4/4/2022 but the most recent update of the underlying NCBI database was 2016.",
"## Considerations for Using the Data\n\nDiscussion of Biases: This dataset of protein interactions was manually curated by experts utilizing published scientific literature.\nThis inherently biases the collection to well-studied proteins and known interactions.\nThe dataset does not contain _negative_ interactions.",
"## Additional Information: \n - Dataset Curators: Will Dampier \n - Citation Information: TBA"
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Description",
"## Dataset Summary\n\nThis dataset was parsed from the Human-HIV Interaction dataset maintained by the NCBI. \nIt contains a >16,000 pairs of interactions between HIV and Human proteins.\nSequences of the interacting proteins were retrieved from the NCBI protein database and added to the dataset.\nThe raw data is available from the NBCI FTP site and the curation strategy is described in the NAR Research paper announcing the dataset.",
"## Dataset Structure",
"### Data Instances\n\nData Fields: hiv_protein_product, hiv_protein_name, interaction_type, human_protein_product, human_protein_name, reference_list, description, hiv_protein_sequence, human_protein_sequence\n\nData Splits: None",
"## Dataset Creation\n\nCuration Rationale: This dataset was curated train models to recognize proteins that interact with HIV.\n\nInitial Data Collection and Normalization: Dataset was downloaded and curated on 4/4/2022 but the most recent update of the underlying NCBI database was 2016.",
"## Considerations for Using the Data\n\nDiscussion of Biases: This dataset of protein interactions was manually curated by experts utilizing published scientific literature.\nThis inherently biases the collection to well-studied proteins and known interactions.\nThe dataset does not contain _negative_ interactions.",
"## Additional Information: \n - Dataset Curators: Will Dampier \n - Citation Information: TBA"
] |
484a5ad065c06cb4e04333ed4e4947a7e0373192 |
Collection of pinned butterfly images from the Smithsonian https://www.si.edu/spotlight/buginfo/butterfly
Doesn't include metadata yet!
Url pattern: "https://ids.si.edu/ids/deliveryService?max_w=550&id=ark:/65665/m3c70e17cf30314fd4ad86afa7d1ebf49f"
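For example, an image can be fetched from that delivery service with `requests` and opened with Pillow (the `ark` identifier below is the one shown in the pattern above):
```python
import requests
from io import BytesIO
from PIL import Image

ark_id = "ark:/65665/m3c70e17cf30314fd4ad86afa7d1ebf49f"   # example id from the pattern above
url = f"https://ids.si.edu/ids/deliveryService?max_w=550&id={ark_id}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
img = Image.open(BytesIO(resp.content))
print(img.size)
```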
Added sketch versions!
sketch_pidinet is generated by : https://github.com/zhuoinoulu/pidinet
sketch_pix2pix is generated by : https://github.com/mtli/PhotoSketch
| huggan/smithsonian-butterfly-lowres | [
"license:cc0-1.0",
"region:us"
] | 2022-04-04T17:45:28+00:00 | {"license": "cc0-1.0"} | 2022-04-06T18:57:24+00:00 | [] | [] | TAGS
#license-cc0-1.0 #region-us
|
Collection of pinned butterfly images from the Smithsonian URL
Doesn't include metadata yet!
Url pattern: "URL
Added sketch versions!
sketch_pidinet is generated by : URL
sketch_pix2pix is generated by : URL
| [] | [
"TAGS\n#license-cc0-1.0 #region-us \n"
] |
dd2d9cbe7ba3139d1f48096e3f19ce2eba4d27eb |
# Dataset Card for the-reddit-dataset-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-dataset-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
### Dataset Summary
A meta dataset of Reddit's own /r/datasets community.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
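Because posts and comments live in separate files, they are likely exposed as separate configurations; a hedged sketch of loading both (the configuration and split names here are assumptions, so inspect the repository first):
```py
from datasets import load_dataset

# Assumption: "posts" and "comments" configurations, each with a "train" split.
posts = load_dataset("SocialGrep/the-reddit-dataset-dataset", "posts", split="train")
comments = load_dataset("SocialGrep/the-reddit-dataset-dataset", "comments", split="train")

# Shared fields such as 'score' and 'created_utc' can be compared across both.
print(posts.features)
print(comments.features)
```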
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/the-reddit-dataset-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-04T19:47:35+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"]} | 2022-07-01T16:55:48+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for the-reddit-dataset-dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Licensing Information
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
A meta dataset of Reddit's own /r/datasets community.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
| [
"# Dataset Card for the-reddit-dataset-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nA meta dataset of Reddit's own /r/datasets community.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for the-reddit-dataset-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nA meta dataset of Reddit's own /r/datasets community.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] |
c50846883a030dd8930ee5788524902b10439b63 |
# Dataset Card for JetClass
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jet-universe/particle_transformer
- **Paper:** https://arxiv.org/abs/2202.03772
- **Leaderboard:**
- **Point of Contact:** [Huilin Qu](mailto:[email protected])
### Dataset Summary
JetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:
* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the
LHC.
* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.
Jets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector. Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance
parameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the JetClass dataset, please cite:
```
@article{Qu:2022mxj,
author = "Qu, Huilin and Li, Congqiao and Qian, Sitian",
title = "{Particle Transformer for Jet Tagging}",
eprint = "2202.03772",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
month = "2",
year = "2022"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
| jet-universe/jetclass | [
"license:mit",
"arxiv:2202.03772",
"region:us"
] | 2022-04-05T06:32:22+00:00 | {"license": "mit"} | 2022-05-27T18:00:45+00:00 | [
"2202.03772"
] | [] | TAGS
#license-mit #arxiv-2202.03772 #region-us
|
# Dataset Card for JetClass
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Initial Data Collection and Normalization
- Who are the source language producers?
- Annotations
- Annotation process
- Who are the annotators?
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: Huilin Qu
### Dataset Summary
JetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:
* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the
LHC.
* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.
Jets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector. Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance
parameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
If you use the JetClass dataset, please cite:
### Contributions
Thanks to @lewtun for adding this dataset.
| [
"# Dataset Card for JetClass",
"## Table of Contents\r\n- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)\r\n - Table of Contents\r\n - Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks and Leaderboards\r\n - Languages\r\n - Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n - Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Initial Data Collection and Normalization\r\n - Who are the source language producers?\r\n - Annotations\r\n - Annotation process\r\n - Who are the annotators?\r\n - Personal and Sensitive Information\r\n - Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n - Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information\r\n - Contributions",
"## Dataset Description\r\n\r\n- Homepage:\r\n- Repository: URL\r\n- Paper: URL\r\n- Leaderboard:\r\n- Point of Contact: Huilin Qu",
"### Dataset Summary\r\n\r\nJetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:\r\n\r\n* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the\r\nLHC.\r\n* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.\r\n\r\nJets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector . Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance\r\nparameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\r\n\r\n\r\n\r\n\r\n\r\nIf you use the JetClass dataset, please cite:",
"### Contributions\r\n\r\nThanks to @lewtun for adding this dataset."
] | [
"TAGS\n#license-mit #arxiv-2202.03772 #region-us \n",
"# Dataset Card for JetClass",
"## Table of Contents\r\n- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)\r\n - Table of Contents\r\n - Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks and Leaderboards\r\n - Languages\r\n - Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n - Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Initial Data Collection and Normalization\r\n - Who are the source language producers?\r\n - Annotations\r\n - Annotation process\r\n - Who are the annotators?\r\n - Personal and Sensitive Information\r\n - Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n - Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information\r\n - Contributions",
"## Dataset Description\r\n\r\n- Homepage:\r\n- Repository: URL\r\n- Paper: URL\r\n- Leaderboard:\r\n- Point of Contact: Huilin Qu",
"### Dataset Summary\r\n\r\nJetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:\r\n\r\n* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the\r\nLHC.\r\n* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.\r\n\r\nJets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector . Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance\r\nparameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\r\n\r\n\r\n\r\n\r\n\r\nIf you use the JetClass dataset, please cite:",
"### Contributions\r\n\r\nThanks to @lewtun for adding this dataset."
] |
8a7e41314267b68ddb15d3c9da012b9c98bf2a78 |
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset
minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("PolyAI/minds14", "all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first intent class
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent_class to fine-tune your model for audio classification
```
## Dataset Structure
We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.
### Data Instances
**fr-FR**
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config `fr-FR` looks as follows:
```
{
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"audio": {
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"array": array(
[0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
),
"sampling_rate": 8000,
},
"transcription": "je souhaite changer mon adresse",
"english_transcription": "I want to change my address",
"intent_class": 1,
"lang_id": 6,
}
```
### Data Fields
The data fields are the same among all splits.
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of intent
- **lang_id** (int): Id of language
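The example instance above is sampled at 8 kHz, while many pretrained speech models expect 16 kHz input. A small sketch of on-the-fly resampling with the `datasets` `Audio` feature (the resampling step itself is an addition to this card, not part of the original pipeline):
```py
from datasets import load_dataset, Audio

minds_14 = load_dataset("PolyAI/minds14", "fr-FR", split="train")

# decode the audio at 16 kHz instead of the native 8 kHz
minds_14 = minds_14.cast_column("audio", Audio(sampling_rate=16_000))

sample = minds_14[0]["audio"]
print(sample["sampling_rate"])  # 16000
```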
### Data Splits
Every config only has the `"train"` split containing *ca.* 600 examples.
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
author = {Daniela Gerz and
Pei{-}Hao Su and
Razvan Kusztos and
Avishek Mondal and
Michal Lis and
Eshan Singhal and
Nikola Mrksic and
Tsung{-}Hsien Wen and
Ivan Vulic},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
journal = {CoRR},
volume = {abs/2104.08524},
year = {2021},
url = {https://arxiv.org/abs/2104.08524},
eprinttype = {arXiv},
eprint = {2104.08524},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset
| PolyAI/minds14 | [
"task_categories:automatic-speech-recognition",
"task_ids:keyword-spotting",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:en",
"language:fr",
"language:it",
"language:es",
"language:pt",
"language:de",
"language:nl",
"language:ru",
"language:pl",
"language:cs",
"language:ko",
"language:zh",
"license:cc-by-4.0",
"arxiv:2104.08524",
"region:us"
] | 2022-04-05T06:46:13+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en", "fr", "it", "es", "pt", "de", "nl", "ru", "pl", "cs", "ko", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "task_categories": ["automatic-speech-recognition", "speech-processing"], "task_ids": ["speech-recognition", "keyword-spotting"], "pretty_name": "MInDS-14", "language_bcp47": ["en", "en-GB", "en-US", "en-AU", "fr", "it", "es", "pt", "de", "nl", "ru", "pl", "cs", "ko", "zh"]} | 2024-01-22T23:15:06+00:00 | [
"2104.08524"
] | [
"en",
"fr",
"it",
"es",
"pt",
"de",
"nl",
"ru",
"pl",
"cs",
"ko",
"zh"
] | TAGS
#task_categories-automatic-speech-recognition #task_ids-keyword-spotting #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-English #language-French #language-Italian #language-Spanish #language-Portuguese #language-German #language-Dutch #language-Russian #language-Polish #language-Czech #language-Korean #language-Chinese #license-cc-by-4.0 #arxiv-2104.08524 #region-us
|
# MInDS-14
## Dataset Description
- Fine-Tuning script: pytorch/audio-classification
- Paper: Multilingual and Cross-Lingual Intent Detection from Spoken Data
- Total amount of disk used: ca. 500 MB
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
## Dataset Structure
We show detailed information for the example configuration 'fr-FR' of the dataset.
All other configurations have the same structure.
### Data Instances
fr-FR
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config 'fr-FR' looks as follows:
### Data Fields
The data fields are the same among all splits.
- path (str): Path to the audio file
- audio (dict): Audio object including loaded audio array, sampling rate and path to audio
- transcription (str): Transcription of the audio file
- english_transcription (str): English transcription of the audio file
- intent_class (int): Class id of intent
- lang_id (int): Id of language
### Data Splits
Every config only has the '"train"' split containing *ca.* 600 examples.
## Dataset Creation
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
All datasets are licensed under the Creative Commons license (CC-BY).
### Contributions
Thanks to @patrickvonplaten for adding this dataset
| [
"# MInDS-14",
"## Dataset Description\n\n- Fine-Tuning script: pytorch/audio-classification\n- Paper: Multilingual and Cross-Lingual Intent Detection from Spoken Data\n- Total amount of disk used: ca. 500 MB \n\nMINDS-14 is training and evaluation resource for intent detection task with spoken data. It covers 14 \nintents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.",
"## Example\n\nMInDS-14 can be downloaded and used as follows:",
"## Dataset Structure\n\nWe show detailed information the example configurations 'fr-FR' of the dataset.\nAll other configurations have the same structure.",
"### Data Instances\n\nfr-FR\n\n- Size of downloaded dataset files: 471 MB\n- Size of the generated dataset: 300 KB\n- Total amount of disk used: 471 MB\n\n\nAn example of a datainstance of the config 'fr-FR' looks as follows:",
"### Data Fields\nThe data fields are the same among all splits.\n\n- path (str): Path to the audio file\n- audio (dict): Audio object including loaded audio array, sampling rate and path ot audio\n- transcription (str): Transcription of the audio file\n- english_transcription (str): English transcription of the audio file\n- intent_class (int): Class id of intent\n- lang_id (int): Id of language",
"### Data Splits\nEvery config only has the '\"train\"' split containing of *ca.* 600 examples.",
"## Dataset Creation",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).",
"### Contributions\n\nThanks to @patrickvonplaten for adding this dataset"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_ids-keyword-spotting #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-English #language-French #language-Italian #language-Spanish #language-Portuguese #language-German #language-Dutch #language-Russian #language-Polish #language-Czech #language-Korean #language-Chinese #license-cc-by-4.0 #arxiv-2104.08524 #region-us \n",
"# MInDS-14",
"## Dataset Description\n\n- Fine-Tuning script: pytorch/audio-classification\n- Paper: Multilingual and Cross-Lingual Intent Detection from Spoken Data\n- Total amount of disk used: ca. 500 MB \n\nMINDS-14 is training and evaluation resource for intent detection task with spoken data. It covers 14 \nintents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.",
"## Example\n\nMInDS-14 can be downloaded and used as follows:",
"## Dataset Structure\n\nWe show detailed information the example configurations 'fr-FR' of the dataset.\nAll other configurations have the same structure.",
"### Data Instances\n\nfr-FR\n\n- Size of downloaded dataset files: 471 MB\n- Size of the generated dataset: 300 KB\n- Total amount of disk used: 471 MB\n\n\nAn example of a datainstance of the config 'fr-FR' looks as follows:",
"### Data Fields\nThe data fields are the same among all splits.\n\n- path (str): Path to the audio file\n- audio (dict): Audio object including loaded audio array, sampling rate and path ot audio\n- transcription (str): Transcription of the audio file\n- english_transcription (str): English transcription of the audio file\n- intent_class (int): Class id of intent\n- lang_id (int): Id of language",
"### Data Splits\nEvery config only has the '\"train\"' split containing of *ca.* 600 examples.",
"## Dataset Creation",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).",
"### Contributions\n\nThanks to @patrickvonplaten for adding this dataset"
] |
6342d0716fac4e248c53a27039c7d30ccaa9342b | # AutoTrain Dataset for project: sentiment_analysis_project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sentiment_analysis_project.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Realizing that I don`t have school today... or tomorrow... or for the next few months. I really nee[...]",
"target": 1
},
{
"text": "Good morning tweeps. Busy this a.m. but not in a working way",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['negative', 'neutral', 'positive'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16180 |
| valid | 4047 |
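A hedged sketch of loading these splits with the `datasets` library; whether the repository is public and how its files are laid out are assumptions:
```py
from datasets import load_dataset

# Split names "train" and "valid" follow the table above; the file layout is an assumption.
dataset = load_dataset("ramnika003/autotrain-data-sentiment_analysis_project")

label_names = dataset["train"].features["target"].names  # ['negative', 'neutral', 'positive']
example = dataset["train"][0]
print(label_names[example["target"]], "-", example["text"][:80])
```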
| ramnika003/autotrain-data-sentiment_analysis_project | [
"task_categories:text-classification",
"region:us"
] | 2022-04-05T08:13:43+00:00 | {"task_categories": ["text-classification"]} | 2022-04-05T08:16:59+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: sentiment\_analysis\_project
===========================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project sentiment\_analysis\_project.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
8ec4ba6640805906d0c61886e65810c8ee78a982 |
# Dataset Card for the-reddit-place-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-place-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
### Dataset Summary
The written history of /r/Place, in posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
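Since 'created_utc' is a UTC timestamp, the /r/Place timeline can be reconstructed after loading; a sketch that assumes Unix-second timestamps and a "comments" configuration with a "train" split:
```py
from datetime import datetime, timezone

from datasets import load_dataset

# Assumptions: a "comments" configuration, a "train" split, and Unix-second timestamps.
comments = load_dataset("SocialGrep/the-reddit-place-dataset", "comments", split="train")

def add_day(row):
    # derive a calendar day from the UTC timestamp for per-day aggregation
    row["day"] = datetime.fromtimestamp(row["created_utc"], tz=timezone.utc).date().isoformat()
    return row

comments = comments.map(add_day)
print(comments[0]["day"], comments[0]["permalink"])
```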
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/the-reddit-place-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-05T20:25:45+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T16:51:57+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for the-reddit-place-dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Licensing Information
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
The written history of /r/Place, in posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
| [
"# Dataset Card for the-reddit-place-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThe written history or /r/Place, in posts and comments.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for the-reddit-place-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThe written history or /r/Place, in posts and comments.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] |
66f430a1252ea1732413a80a56a1b6e8bc74264e |
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley ([email protected]).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
 0: advertisement  
1: budget
2: email
3: file folder
4: form
5: handwritten
6: invoice
7: letter
8: memo
9: news article
10: presentation
11: questionnaire
12: resume
13: scientific publication
14: scientific report
15: specification
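A sketch that turns the label list above into an id-to-name mapping and loads the images; the split and feature names are assumptions, since the card does not document the schema:
```py
from datasets import load_dataset

id2label = {
    0: "advertisement", 1: "budget", 2: "email", 3: "file folder",
    4: "form", 5: "handwritten", 6: "invoice", 7: "letter",
    8: "memo", 9: "news article", 10: "presentation", 11: "questionnaire",
    12: "resume", 13: "scientific publication", 14: "scientific report", 15: "specification",
}

# Assumption: the repository exposes "image" and "label" features and a "train" split.
dataset = load_dataset("chainyo/rvl-cdip", split="train")
print(id2label[dataset[0]["label"]])
```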
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/.
| chainyo/rvl-cdip | [
"license:other",
"region:us"
] | 2022-04-06T06:06:56+00:00 | {"license": "other"} | 2022-04-06T15:49:20+00:00 | [] | [] | TAGS
#license-other #region-us
|
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@URL).
The full dataset can be found here.
## Labels
 0: advertisement  
1: budget
2: email
3: file folder
4: form
5: handwritten
6: invoice
7: letter
8: memo
9: news article
10: presentation
11: questionnaire
12: resume
13: scientific publication
14: scientific report
15: specification
This dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015'
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL
| [
"## Labels\r\n\r\n 0: advertissement \r\n 1: budget \r\n 2: email \r\n 3: file folder \r\n 4: form \r\n 5: handwritten \r\n 6: invoice \r\n 7: letter \r\n 8: memo \r\n 9: news article \r\n10: presentation \r\n11: questionnaire \r\n12: resume \r\n13: scientific publication \r\n14: scientific report \r\n15: specification \r\n\r\nThis dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, \"Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval,\" in ICDAR, 2015'",
"## License\r\n\r\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.",
"## References\r\n\r\n1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, \"Building a test collection for complex document information processing,\" in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006\r\n2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL"
] | [
"TAGS\n#license-other #region-us \n",
"## Labels\r\n\r\n 0: advertissement \r\n 1: budget \r\n 2: email \r\n 3: file folder \r\n 4: form \r\n 5: handwritten \r\n 6: invoice \r\n 7: letter \r\n 8: memo \r\n 9: news article \r\n10: presentation \r\n11: questionnaire \r\n12: resume \r\n13: scientific publication \r\n14: scientific report \r\n15: specification \r\n\r\nThis dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, \"Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval,\" in ICDAR, 2015'",
"## License\r\n\r\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.",
"## References\r\n\r\n1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, \"Building a test collection for complex document information processing,\" in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006\r\n2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL"
] |
b646090ef0d09981da9c9765c4d376b407aa5955 |
# An Amharic News Text classification Dataset
> In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments.
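A hedged sketch of pulling the dataset from the Hub and inspecting its columns (whether the repository hosts directly loadable data files, and their exact schema, are assumptions):
```py
from datasets import load_dataset

# Assumption: the repository contains data files that `datasets` can load directly.
dataset = load_dataset("israel/Amharic-News-Text-classification-Dataset")

print(dataset)                        # splits and row counts
print(dataset["train"].column_names)  # inspect the real column names before use
print(dataset["train"][0])            # one of the ~50k labelled news articles
```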
```
@misc{https://doi.org/10.48550/arxiv.2103.05639,
doi = {10.48550/ARXIV.2103.05639},
url = {https://arxiv.org/abs/2103.05639},
author = {Azime, Israel Abebe and Mohammed, Nebil},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {An Amharic News Text classification Dataset},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| israel/Amharic-News-Text-classification-Dataset | [
"license:cc-by-4.0",
"arxiv:2103.05639",
"region:us"
] | 2022-04-06T08:20:35+00:00 | {"license": "cc-by-4.0"} | 2022-04-06T08:27:52+00:00 | [
"2103.05639"
] | [] | TAGS
#license-cc-by-4.0 #arxiv-2103.05639 #region-us
|
# An Amharic News Text classification Dataset
> In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments.
| [
"# An Amharic News Text classification Dataset \r\n\r\n> In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments."
] | [
"TAGS\n#license-cc-by-4.0 #arxiv-2103.05639 #region-us \n",
"# An Amharic News Text classification Dataset \r\n\r\n> In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments."
] |
d559852d2b232e0fcf195e775866964f0564f2b5 |
## Dataset Description
- **Homepage:** https://www.wikiart.org/
### Dataset Summary
Dataset containing 81,444 pieces of visual art from various artists, taken from WikiArt.org,
along with class labels for each image :
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
On WikiArt.org, the description for the "Artworks by Genre" page reads :
A genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres was developed in European culture by the 17th century. It ranked genres in high – history painting and portrait, - and low – genre painting, landscape and still life. This hierarchy was based on the notion of man as the measure of all things. Landscape and still life were the lowest because they did not involve human subject matter. History was highest because it dealt with the noblest events of humanity. Genre system is not so much relevant for a contemporary art; there are just two genre definitions that are usually applied to it: abstract or figurative.
The "Artworks by Style" page reads :
A style of an artwork refers to its distinctive visual elements, techniques and methods. It usually corresponds with an art movement or a school (group) that its author is associated with.
## Dataset Structure
* "image" : image
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
### Source Data
Files taken from this [archive](https://archive.org/download/wikiart-dataset/wikiart.tar.gz), curated from the [WikiArt website](https://www.wikiart.org/).
## Additional Information
Note:
* The WikiArt dataset can be used only for non-commercial research purpose.
* The images in the WikiArt dataset were obtained from WikiArt.org.
* The authors are neither responsible for the content nor the meaning of these images.
By using the WikiArt dataset, you agree to obey the terms and conditions of WikiArt.org.
### Contributions
[`gigant`](https://huggingface.co/gigant) added this dataset to the hub. | huggan/wikiart | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"license:unknown",
"art",
"region:us"
] | 2022-04-06T08:40:18+00:00 | {"license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification", "text-to-image", "image-to-text"], "license_details": "Data files \u00a9 Original Authors", "tags": ["art"]} | 2023-03-22T13:56:08+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-text-to-image #task_categories-image-to-text #size_categories-10K<n<100K #license-unknown #art #region-us
|
## Dataset Description
- Homepage: URL
### Dataset Summary
Dataset containing 81,444 pieces of visual art from various artists, taken from URL,
along with class labels for each image :
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
On URL, the description for the "Artworks by Genre" page reads :
A genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres was developed in European culture by the 17th century. It ranked genres in high – history painting and portrait, - and low – genre painting, landscape and still life. This hierarchy was based on the notion of man as the measure of all things. Landscape and still life were the lowest because they did not involve human subject matter. History was highest because it dealt with the noblest events of humanity. Genre system is not so much relevant for a contemporary art; there are just two genre definitions that are usually applied to it: abstract or figurative.
The "Artworks by Style" page reads :
A style of an artwork refers to its distinctive visual elements, techniques and methods. It usually corresponds with an art movement or a school (group) that its author is associated with.
## Dataset Structure
* "image" : image
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
### Source Data
Files taken from this archive, curated from the WikiArt website.
## Additional Information
Note:
* The WikiArt dataset can be used only for non-commercial research purpose.
* The images in the WikiArt dataset were obtained from URL.
* The authors are neither responsible for the content nor the meaning of these images.
By using the WikiArt dataset, you agree to obey the terms and conditions of URL.
### Contributions
'gigant' added this dataset to the hub. | [
"## Dataset Description\n\n- Homepage: URL",
"### Dataset Summary\n\nDataset containing 81,444 pieces of visual art from various artists, taken from URL,\nalong with class labels for each image :\n\n* \"artist\" : 129 artist classes, including a \"Unknown Artist\" class\n* \"genre\" : 11 genre classes, including a \"Unknown Genre\" class\n* \"style\" : 27 style classes\n\nOn URL, the description for the \"Artworks by Genre\" page reads :\n\nA genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres was developed in European culture by the 17th century. It ranked genres in high – history painting and portrait, - and low – genre painting, landscape and still life. This hierarchy was based on the notion of man as the measure of all things. Landscape and still life were the lowest because they did not involve human subject matter. History was highest because it dealt with the noblest events of humanity. Genre system is not so much relevant for a contemporary art; there are just two genre definitions that are usually applied to it: abstract or figurative.\n\nThe \"Artworks by Style\" page reads :\n\nA style of an artwork refers to its distinctive visual elements, techniques and methods. It usually corresponds with an art movement or a school (group) that its author is associated with.",
"## Dataset Structure\n\n* \"image\" : image\n* \"artist\" : 129 artist classes, including a \"Unknown Artist\" class\n* \"genre\" : 11 genre classes, including a \"Unknown Genre\" class\n* \"style\" : 27 style classes",
"### Source Data\n\nFiles taken from this archive, curated from the WikiArt website.",
"## Additional Information\n\nNote:\n\n* The WikiArt dataset can be used only for non-commercial research purpose.\n* The images in the WikiArt dataset were obtained from URL.\n* The authors are neither responsible for the content nor the meaning of these images.\n\nBy using the WikiArt dataset, you agree to obey the terms and conditions of URL.",
"### Contributions\n\n'gigant' added this dataset to the hub."
] | [
"TAGS\n#task_categories-image-classification #task_categories-text-to-image #task_categories-image-to-text #size_categories-10K<n<100K #license-unknown #art #region-us \n",
"## Dataset Description\n\n- Homepage: URL",
"### Dataset Summary\n\nDataset containing 81,444 pieces of visual art from various artists, taken from URL,\nalong with class labels for each image :\n\n* \"artist\" : 129 artist classes, including a \"Unknown Artist\" class\n* \"genre\" : 11 genre classes, including a \"Unknown Genre\" class\n* \"style\" : 27 style classes\n\nOn URL, the description for the \"Artworks by Genre\" page reads :\n\nA genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres was developed in European culture by the 17th century. It ranked genres in high – history painting and portrait, - and low – genre painting, landscape and still life. This hierarchy was based on the notion of man as the measure of all things. Landscape and still life were the lowest because they did not involve human subject matter. History was highest because it dealt with the noblest events of humanity. Genre system is not so much relevant for a contemporary art; there are just two genre definitions that are usually applied to it: abstract or figurative.\n\nThe \"Artworks by Style\" page reads :\n\nA style of an artwork refers to its distinctive visual elements, techniques and methods. It usually corresponds with an art movement or a school (group) that its author is associated with.",
"## Dataset Structure\n\n* \"image\" : image\n* \"artist\" : 129 artist classes, including a \"Unknown Artist\" class\n* \"genre\" : 11 genre classes, including a \"Unknown Genre\" class\n* \"style\" : 27 style classes",
"### Source Data\n\nFiles taken from this archive, curated from the WikiArt website.",
"## Additional Information\n\nNote:\n\n* The WikiArt dataset can be used only for non-commercial research purpose.\n* The images in the WikiArt dataset were obtained from URL.\n* The authors are neither responsible for the content nor the meaning of these images.\n\nBy using the WikiArt dataset, you agree to obey the terms and conditions of URL.",
"### Contributions\n\n'gigant' added this dataset to the hub."
] |
6aa6bccd5e72aac4a0e6d32b140564390a8a165a | - This is a personal convenience copy of the binary Hate Speech (HS) dataset used in the T-Miner paper on defending against trojan attacks on text classifiers: https://arxiv.org/pdf/2103.04264.pdf
- The dataset is sourced from the original paper's Github repository: https://github.com/reza321/T-Miner
- Label mapping:
- 0 = hate speech
- 1 = normal speech
- If you use this dataset please cite the T-Miner paper (see bibtex below), and the two original papers from which T-Miner constructed the dataset (see paper for references):
```@inproceedings{azizi21tminer,
title={T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification},
author={Azizi, Ahmadreza and Tahmid, Ibrahim and Waheed, Asim and Mangaokar, Neal and Pu, Jiameng and Javed, Mobin and Reddy, Chandan K. and Viswanath, Bimal},
booktitle={Proc. of USENIX Security},
year={2021}}
``` | nealmgkr/tminer_hs | [
"arxiv:2103.04264",
"region:us"
] | 2022-04-06T08:41:13+00:00 | {} | 2022-04-06T08:45:48+00:00 | [
"2103.04264"
] | [] | TAGS
#arxiv-2103.04264 #region-us
| - This is a personal convenience copy of the binary Hate Speech (HS) dataset used in the T-Miner paper on defending against trojan attacks on text classifiers: URL
- The dataset is sourced from the original paper's Github repository: URL
- Label mapping:
- 0 = hate speech
- 1 = normal speech
- If you use this dataset please cite the T-Miner paper (see bibtex below), and the two original papers from which T-Miner constructed the dataset (see paper for references):
| [] | [
"TAGS\n#arxiv-2103.04264 #region-us \n"
] |
1cad77bdc16e9965ba15285d5fc9ca347d6cec3a |
# Dataset Card for MTet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://translate.vietai.org/
- **Repository:** https://github.com/vietai/mTet
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
The languages in the dataset are:
- Vietnamese (`vi`)
- English (`en`)
## Dataset Structure
### Data Instances
```
{
'translation': {
'en': 'He said that existing restrictions would henceforth be legally enforceable, and violators would be fined.',
'vi': 'Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.'
}
}
```
### Data Fields
- `translation`:
- `en`: Parallel text in English.
- `vi`: Parallel text in Vietnamese.
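As a quick orientation, the fields above can be read like this with the Hugging Face `datasets` library; this is a minimal sketch that assumes the repository id `albertvillanova/mtet` and the single `train` split listed in the next section.
```python
from datasets import load_dataset

# Assumption: the dataset loads directly from its hub id.
ds = load_dataset("albertvillanova/mtet", split="train")

pair = ds[0]["translation"]
print(pair["en"])  # English side of the pair
print(pair["vi"])  # Vietnamese side of the pair
```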
### Data Splits
The dataset is in a single "train" split.
| | train |
|--------------------|--------:|
| Number of examples | 4163853 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@article{mTet2022,
author = {Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong},
title = {MTet: Multi-domain Translation for English and Vietnamese},
journal = {https://github.com/vietai/mTet},
year = {2022},
}
```
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
| albertvillanova/mtet | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|bible_para",
"source_datasets:extended|kde4",
"source_datasets:extended|opus_gnome",
"source_datasets:extended|open_subtitles",
"source_datasets:extended|tatoeba",
"language:en",
"language:vi",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-04-06T09:25:42+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en", "vi"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original", "extended|bible_para", "extended|kde4", "extended|opus_gnome", "extended|open_subtitles", "extended|tatoeba"], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation"], "pretty_name": "MTet"} | 2022-10-08T06:42:34+00:00 | [] | [
"en",
"vi"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|bible_para #source_datasets-extended|kde4 #source_datasets-extended|opus_gnome #source_datasets-extended|open_subtitles #source_datasets-extended|tatoeba #language-English #language-Vietnamese #license-cc-by-nc-sa-4.0 #region-us
| Dataset Card for MTet
=====================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains.
### Supported Tasks and Leaderboards
* Machine Translation
### Languages
The languages in the dataset are:
* Vietnamese ('vi')
* English ('en')
Dataset Structure
-----------------
### Data Instances
### Data Fields
* 'translation':
+ 'en': Parallel text in English.
+ 'vi': Parallel text in Vietnamese.
### Data Splits
The dataset is in a single "train" split.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
### Contributions
Thanks to @albertvillanova for adding this dataset.
| [
"### Dataset Summary\n\n\nMTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of\ntexts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,\nliterature, news, and poems.\n\n\nThis dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality\nEnglish-Vietnamese sentence pairs on various domains.",
"### Supported Tasks and Leaderboards\n\n\n* Machine Translation",
"### Languages\n\n\nThe languages in the dataset are:\n\n\n* Vietnamese ('vi')\n* English ('en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* 'translation':\n\t+ 'en': Parallel text in English.\n\t+ 'vi': Parallel text in Vietnamese.",
"### Data Splits\n\n\nThe dataset is in a single \"train\" split.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).",
"### Contributions\n\n\nThanks to @albertvillanova for adding this dataset."
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|bible_para #source_datasets-extended|kde4 #source_datasets-extended|opus_gnome #source_datasets-extended|open_subtitles #source_datasets-extended|tatoeba #language-English #language-Vietnamese #license-cc-by-nc-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nMTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of\ntexts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,\nliterature, news, and poems.\n\n\nThis dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality\nEnglish-Vietnamese sentence pairs on various domains.",
"### Supported Tasks and Leaderboards\n\n\n* Machine Translation",
"### Languages\n\n\nThe languages in the dataset are:\n\n\n* Vietnamese ('vi')\n* English ('en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* 'translation':\n\t+ 'en': Parallel text in English.\n\t+ 'vi': Parallel text in Vietnamese.",
"### Data Splits\n\n\nThe dataset is in a single \"train\" split.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).",
"### Contributions\n\n\nThanks to @albertvillanova for adding this dataset."
] |
8d2332e07e64ae6adbc81586b8778785e2a80c29 |
# Dataset Card for french-open-fiscal-texts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://echanges.dila.gouv.fr/OPENDATA/JADE/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court "Conseil d'Etat".
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
fr-FR
## Dataset Structure
### Data Instances
```json
{
"file": "CETATEXT000007584427.xml",
"title": "Cour administrative d'appel de Marseille, 3�me chambre - formation � 3, du 21 octobre 2004, 00MA01080, in�dit au recueil Lebon",
"summary": "",
"content": "Vu la requête, enregistrée le 22 mai 2000, présentée pour M. Roger X, par Me Luherne, élisant domicile ...), et les mémoires complémentaires en date des 28 octobre 2002, 22 mars 2004 et 16 septembre 2004 ; M. X demande à la Cour :\n\n\n \n 11/ d'annuler le jugement n° 951520 en date du 16 mars 2000 par lequel le Tribunal administratif de Montpellier a rejeté sa requête tendant à la réduction des cotisations supplémentaires à l'impôt sur le revenu et des pénalités dont elles ont été assorties, auxquelles il a été assujetti au titre des années 1990, 1991 et 1992 ;\n\n\n \n 22/ de prononcer la réduction desdites cotisations ;\n\n\n \n 3°/ de condamner de l'Etat à lui verser une somme de 32.278 francs soit 4.920,75 euros"
}
```
### Data Fields
`file`: identifier on the JADE OPENDATA file
`title`: Name of the law case
`summary`: Summary provided by JADE (may be missing)
`content`: Text content of the case law
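Since the curation rationale below mentions summarization, a natural way to use these fields is as (content, summary) pairs. The snippet is a minimal sketch: it assumes the repository id `StanBienaives/french-open-fiscal-texts`, a `train` split, and simply drops rows whose JADE summary is missing.
```python
from datasets import load_dataset

# Assumptions: hub id and split name as stated above.
ds = load_dataset("StanBienaives/french-open-fiscal-texts", split="train")

# Keep only case laws that come with a non-empty summary.
with_summary = ds.filter(lambda row: row["summary"] is not None and row["summary"].strip() != "")
pairs = [(row["content"], row["summary"]) for row in with_summary]
print(f"{len(pairs)} (content, summary) pairs available for summarization")
```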
### Data Splits
train
test
## Dataset Creation
### Curation Rationale
This dataset is an attempt to gather multiple tax-related French legal texts.
The first intent is to build a model to summarize law cases.
### Source Data
#### Initial Data Collection and Normalization
Collected from the https://echanges.dila.gouv.fr/OPENDATA/
- Filtering xml files containing "Code général des impôts" (tax related)
- Extracting content, summary, identifier, title
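The two steps above could be reproduced roughly as follows. This is an illustrative reconstruction, not the original script: the directory layout is an assumption, and the field extraction is left out because it depends on the JADE XML schema.
```python
import glob
import xml.etree.ElementTree as ET

tax_related = []
for path in glob.glob("OPENDATA/JADE/**/*.xml", recursive=True):
    root = ET.parse(path).getroot()
    # Flatten the XML to plain text and keep only tax-related case laws.
    full_text = ET.tostring(root, encoding="unicode", method="text")
    if "Code général des impôts" in full_text:
        tax_related.append(path)

print(f"{len(tax_related)} tax-related files kept")
```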
#### Who are the source language producers?
DILA
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | StanBienaives/french-open-fiscal-texts | [
"language:fr",
"region:us"
] | 2022-04-06T10:42:06+00:00 | {"language": ["fr"]} | 2024-01-15T10:05:08+00:00 | [] | [
"fr"
] | TAGS
#language-French #region-us
|
# Dataset Card for french-open-fiscal-texts
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court "Conseil d'Etat".
### Supported Tasks and Leaderboards
### Languages
fr-FR
## Dataset Structure
### Data Instances
### Data Fields
'file': identifier on the JADE OPENDATA file
'title': Name of the law case
'summary': Summary provided by JADE (may be missing)
'content': Text content of the case law
### Data Splits
train
test
## Dataset Creation
### Curation Rationale
This dataset is an attempt to gather multiple tax-related French legal texts.
The first intent is to build a model to summarize law cases.
### Source Data
#### Initial Data Collection and Normalization
Collected from the URL
- Filtering xml files containing "Code général des impôts" (tax related)
- Extracting content, summary, identifier, title
#### Who are the source language producers?
DILA
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for french-open-fiscal-texts",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court \"Conseil d'Etat\".",
"### Supported Tasks and Leaderboards",
"### Languages\n\nfr-FR",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n'file': identifier on the JADE OPENDATA file\n\n'title': Name of the law case \n'summary': Summary provided by JADE (may be missing) \n'content': Text content of the case law",
"### Data Splits\n\ntrain \ntest",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset is an attempt to gather multiple tax related french text law. \nThe first intent it to build model to summarize law cases",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nCollected from the URL\n- Filtering xml files containing \"Code général des impôts\" (tax related)\n- Extracting content, summary, identifier, title",
"#### Who are the source language producers?\n\nDILA",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#language-French #region-us \n",
"# Dataset Card for french-open-fiscal-texts",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court \"Conseil d'Etat\".",
"### Supported Tasks and Leaderboards",
"### Languages\n\nfr-FR",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n'file': identifier on the JADE OPENDATA file\n\n'title': Name of the law case \n'summary': Summary provided by JADE (may be missing) \n'content': Text content of the case law",
"### Data Splits\n\ntrain \ntest",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset is an attempt to gather multiple tax related french text law. \nThe first intent it to build model to summarize law cases",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nCollected from the URL\n- Filtering xml files containing \"Code général des impôts\" (tax related)\n- Extracting content, summary, identifier, title",
"#### Who are the source language producers?\n\nDILA",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
1d8fa78d643f0207bfac31f2e42c056769e16fed |
## Common User Intentions
#### Greetings
- Wasemaje
- uko aje btw
- oyah...
- Form
- Alafu niaje
- Poa Sana Mambo
- Niko poa
- Pia Mimi Niko salama
- Hope siku yako iko poa
- Siko poa kabisa
- Nimekuwa poa
- Umeshindaje
- Hope uko poa
- uko poa
- Sasa
- Vipi vipi
- Niko salama
- ..its been long.
- Nko fiti
- niko fiti
- Nmeamka fity..
- Vipi
- Unasemaje
- Aaaah...itakuaje sasaa..
- .iz vipi..itakuaje..
- Form ni gani bro...
- iz vipi
#### Affirm
- Hapo sawa...
- Fty
- sai
- Hio si ni better hadi
- Imebidi.
- Eeeh mazee
- mazeee
- Fity fity
- Oooh poapoa
- Yap
- Inakaa poa
- Yeah itabidi
- Ooooh...
- Si ndo nadaaiii😅
- Oooh sawa
- Okay sawa basi
- Venye utaamua ni sawa
- Sawa wacha tungoje
- lazima
- apa umenena
- Sawa basi
- walai
- Oooh
- inaweza mbaya
- itaweza mbaya
- ni sawa
- Iko poa
- Iko tu sawa hivo
- ilinbamba.
- Nimemada
- Btw hao ata mimi naona
- but inaeleweka
- pia mimi
- iende ikiendaga
- We jua ivo
- Hata Mimi
- Nataka
- Ooh.
- Chezea tu hapo
- isorait
- Ata yako ni kali
- Ntaicheck out Leo
- hmm. Okay
- Mimi sina shida
- ooooh io iko fity...
- hii ni ngori
- maze
- sawa
- banaa
- Aaah kumbe
- Safiii..
- Sasawa
- hio ni fityyy
- Yeah nliona
- Vizii...
- Eeeeh nmekua naiona...
- Yea
- Haina nomA
- katambe
- accept basi
- ni sawa
- Issaplan
- nmeget
- nimedai tu
- eeh
- Hio ni poa
- nadai sa hii
- Eeeeh
- mi nadai tu
- firi
- Hapo freshi
#### Deny
- Sipendi
- aih
- Nimegive up
- Yangu bado
- siezi make
- Sina😊
- Haileti
- Haiwezi
- Io sikuwa nikwambie
- Sikuwa
- Wacha ata
- ata sijui
- Sijasema
- Sijai
- hiyo haiezi
- Bado.
- Uku tricks...
- sidai
- achana nayo
- ziii
- si fityy
- Nimekataa Mimi
- Sijui
- Aiwezekani
- Bado sioni
#### Courtesy
- Imefika... shukran
- Haina ngori
- Inafaa hivo
- Utakuwa umeniokolea manzee
- Karibu
- Nyc one
- Hakuna pressure
- Gai. Pole
- Usijali I will
- Nimekufeel hapo
- Waah izaa
- Pole lkn
- Pole
- plz
- okay...pole
- thanks for pulling up lkn..
- shukran
- Eeeeh nyc
- Thanx for the info
- Uko aje
- haina pressure
- eih, iko fiti.
- vitu kama hizo
- sahii
#### Asking clarification
- check alafu unishow
- Sasa msee akishabuy anafanya aje
- Umeenda wapi
- nlikuwa nadai
- Nlikua nataka
- Ulipata
- leo jioni utakuwa?
- uko
- umelostia wapi?
- ingine?
- hii inamaanisha?
- Wewe Sasa ni nani?
- warrathos
- kwani nisiende sasa
- unadai zingine?
- Kwani
- Haiya...
- Unadu?
- inakuanga mangapiii...
- Kuna nn
- Nauliza
- Hakuna kwanini
- Nadai kujua what
- Kwanini hakuna
- Kwa nini hakuna
- Uliniambia
- Mbona
- Nlikua nashangaa
- Unadu nini
- Oooh mara moja
- Unaeza taka?
- unaeza make?
- Umeipata?
- wapi kwingine tena
- kuna yenye natafuta
- Sijajua bado
- Niko na ingine
- ulikuwa unataka
- ulinishow?
- ulinsho
- Umepata
- Ata stage hakuna?
- Huku hakuna kibandaski?
- Sai ndio uko available
- Ivo
- Inaeza
- Naeza
- Btw, nikuulize
- Uliza
- hadi sa hii
- Nauliza ndio nijue kama bado iko
- Btw ile hoteli tulienda na wewe apo kiimbo huendangi?
#### Comedy
- Ata kama
- Wasikupee pressure
- umeanza jokes
- Ulisumbua sana
- Unaeza niambia ivo
- usinicheke
- Hakuna😁😁kwanini
- aki wewe.
- naskia mpaka ulipiga sherehe
- sio?
- uko na kakitu
- Aaaaii
- .uko fity nayo..
- icome through mbaya...
#### Small talk
- Kuchil tu bana
- Inafaa hivo
- Acha niskizie
- Skujua hii stuff
- nacheza chini
- hii imesink deep.
- mi Niko
- khai, gai, ghaiye
- Woiye
- ndo nmeland
- Nimekuona
- Kaaai
- Nambie
- bado nashangaa aliipull thru maze
- Niambie
- Najua uko kejani
- Bado uko
- Utakuwa sawa
- Niko poa ata kama uniliacha hanging jana
- issa deal
- Walai io nilijua utasema
- hujawai sahau hii
- Sijajua bado
- Ni maroundi tu
- Enyewe imetoka mbali
- Hadi nimekuwa Tao leo
- Ni mnoma mbaya
- Anyway mambo ni polepole
- Imagine
- Sina la kusema
- Sai
- Najua umeboeka
#### Resolute
- Nataka leo
- hayo ndo maisha Sasa
- vile itakuja maze
- Acha tu
- Waaah Leo haiwezi
- Ni sawa tu
- Imeisha
- Itabidi
- siendagi
- siezi kuangusha
- nachangamkia hii
- Weno ivi...
- Hii price iko poa...
#### implore
- but nimetry tena
- aminia tu
- Ebu try
- Alafu
- naona hufeel kuongea
- Watu hawaongei?
- Itabidi tu umesort
- Naona huna shughuli yangu
- tufanye pamoja
- khai, gai, ghaiye
- so kalunch
- ama?
- Sahii ni the best time
- Kwanza sahii
- hii weekend
- Kaanza next weekend ni fity
- this weekend
- Acha ntacheki
- izo sasa..
- Acha tuone
- So tunafikanga ivor morning mapemaa
- naona uko rada
- mapema kiasi
- nimchapie niskie...
- Naisaka walai
#### Bye
- Ama kesho
- Ngoja nta rudi baadaye
- nacheki tu rada ya kesho
- Nitakusort kesho morning
- Ni hivo nimekafunga
- nitakushow
- Nextweek ndio inaeza
- Ntakuchapia kama ntamake
- Freshi
#### Sample Bot Responses
- tulia tu hana mambo mob
- si you know how we do it
- Form ni gani
- Oooh nmekuget
- znaeza kupea stress
- Hues make leo
- nshow password
- Nmeichangamkia design ya ngori
- Oooh nmekuget...
- ilicome through
- Naisaka walai
- kesho ntakuchapia
- nichapie niskie
- Aaaah..😅
- Alafu ile story ya
- Ooooh ebu ntasaka
- Saa ngapi uko free..
- Ama unasema ya
- Safiii..naona uko rada
- Ilkulemea🤣
- Acha ntacheki
- imeharibia form..
- Nmeitafuta
- Ndio nimeget
- inaeza saidia mtu
- Email yako ni gani
- Wacha niangalie
- nangoja ulipe
- nimeshikika
- Sawa tuma email
- Kwani ulimwambia nini
- Najua ata most of the time
- mara most btw
- Unajua tu ni risky
- unadai tu niseme mi ni robot
- kwanini
- ndio usiulizwe
- Ukiangalia niambie
- Last time ukinipigia nilikuwa nimeenda kuoshwa
- ikishaenda kwa mganga hairudi
- Hata Mimi ni hayo mambo madogo madogo ndio imenieka.
- We jua nafikirianga mingi ni venye zingine huwa sisemi
- Na najua
- unarelax
- mm ata sko tensed
- sahii ata ni risky
- but ntakuchapia
- oooh waah..
- aaaah ata ww
- hii si fityy
- maze itabidi tudunde virtual
- tunadunda wapiiii..
- kwani sa mi ndo nafaa kumshow kila time coz this is not the first time namwambia🤦♀️
- Wacha hizo.
- Yeah niko hapa
- Niko
- Give me sometime.
- Maze...nmecheza ki mimi
- Uko busy
- Chill kiasi
- Wacha nikusort
- ntakushow
- looking for you hupatikani
- Mnaniogopa ama
- Wewe unapenda free
- Nakusort sai chill mazee
- Kiasi
- relax mkubwa
- Sahii uko sorted sindio
- Ni juu
- bringing the future to us
- hiyo ni form yangu daily
- Ata mimi sitaki ufala 😂
- Imagine
- Uko sawa
- Uko sawa ama unaitaji ingine
- ka unaeza
- utanichapia tu
- unasemaje lakini
- Niulize
- Uko na number
- Ukiboeka wewe nitext
- unadai sa hii ?
- skuwa nimeona
- Acha nicheki
- Ni Friday bana
- Niko chilled tu
- Unadai aje.
- Utanichapia basi
- Umenyamaza sana bana
- imekam through ama
- Nategea umalize ndo nikushow ile form
- Guidance tu kiasi
- Tutadiscuss pia stori
- Nakwelewa
- tujue niaje
- itaweza mbaya
- Kuna hopes za kulearn
| JeunesseAfricaine/sheng_nlu | [
"license:mit",
"region:us"
] | 2022-04-06T10:51:04+00:00 | {"license": "mit"} | 2022-04-06T12:03:27+00:00 | [] | [] | TAGS
#license-mit #region-us
|
## Common User Intentions
#### Greetings
- Wasemaje
- uko aje btw
- oyah...
- Form
- Alafu niaje
- Poa Sana Mambo
- Niko poa
- Pia Mimi Niko salama
- Hope siku yako iko poa
- Siko poa kabisa
- Nimekuwa poa
- Umeshindaje
- Hope uko poa
- uko poa
- Sasa
- Vipi vipi
- Niko salama
- ..its been long.
- Nko fiti
- niko fiti
- Nmeamka fity..
- Vipi
- Unasemaje
- Aaaah...itakuaje sasaa..
- .iz vipi..itakuaje..
- Form ni gani bro...
- iz vipi
#### Affirm
- Hapo sawa...
- Fty
- sai
- Hio si ni better hadi
- Imebidi.
- Eeeh mazee
- mazeee
- Fity fity
- Oooh poapoa
- Yap
- Inakaa poa
- Yeah itabidi
- Ooooh...
- Si ndo nadaaiii
- Oooh sawa
- Okay sawa basi
- Venye utaamua ni sawa
- Sawa wacha tungoje
- lazima
- apa umenena
- Sawa basi
- walai
- Oooh
- inaweza mbaya
- itaweza mbaya
- ni sawa
- Iko poa
- Iko tu sawa hivo
- ilinbamba.
- Nimemada
- Btw hao ata mimi naona
- but inaeleweka
- pia mimi
- iende ikiendaga
- We jua ivo
- Hata Mimi
- Nataka
- Ooh.
- Chezea tu hapo
- isorait
- Ata yako ni kali
- Ntaicheck out Leo
- hmm. Okay
- Mimi sina shida
- ooooh io iko fity...
- hii ni ngori
- maze
- sawa
- banaa
- Aaah kumbe
- Safiii..
- Sasawa
- hio ni fityyy
- Yeah nliona
- Vizii...
- Eeeeh nmekua naiona...
- Yea
- Haina nomA
- katambe
- accept basi
- ni sawa
- Issaplan
- nmeget
- nimedai tu
- eeh
- Hio ni poa
- nadai sa hii
- Eeeeh
- mi nadai tu
- firi
- Hapo freshi
#### Deny
- Sipendi
- aih
- Nimegive up
- Yangu bado
- siezi make
- Sina
- Haileti
- Haiwezi
- Io sikuwa nikwambie
- Sikuwa
- Wacha ata
- ata sijui
- Sijasema
- Sijai
- hiyo haiezi
- Bado.
- Uku tricks...
- sidai
- achana nayo
- ziii
- si fityy
- Nimekataa Mimi
- Sijui
- Aiwezekani
- Bado sioni
#### Courtesy
- Imefika... shukran
- Haina ngori
- Inafaa hivo
- Utakuwa umeniokolea manzee
- Karibu
- Nyc one
- Hakuna pressure
- Gai. Pole
- Usijali I will
- Nimekufeel hapo
- Waah izaa
- Pole lkn
- Pole
- plz
- okay...pole
- thanks for pulling up lkn..
- shukran
- Eeeeh nyc
- Thanx for the info
- Uko aje
- haina pressure
- eih, iko fiti.
- vitu kama hizo
- sahii
#### Asking clarification
- check alafu unishow
- Sasa msee akishabuy anafanya aje
- Umeenda wapi
- nlikuwa nadai
- Nlikua nataka
- Ulipata
- leo jioni utakuwa?
- uko
- umelostia wapi?
- ingine?
- hii inamaanisha?
- Wewe Sasa ni nani?
- warrathos
- kwani nisiende sasa
- unadai zingine?
- Kwani
- Haiya...
- Unadu?
- inakuanga mangapiii...
- Kuna nn
- Nauliza
- Hakuna kwanini
- Nadai kujua what
- Kwanini hakuna
- Kwa nini hakuna
- Uliniambia
- Mbona
- Nlikua nashangaa
- Unadu nini
- Oooh mara moja
- Unaeza taka?
- unaeza make?
- Umeipata?
- wapi kwingine tena
- kuna yenye natafuta
- Sijajua bado
- Niko na ingine
- ulikuwa unataka
- ulinishow?
- ulinsho
- Umepata
- Ata stage hakuna?
- Huku hakuna kibandaski?
- Sai ndio uko available
- Ivo
- Inaeza
- Naeza
- Btw, nikuulize
- Uliza
- hadi sa hii
- Nauliza ndio nijue kama bado iko
- Btw ile hoteli tulienda na wewe apo kiimbo huendangi?
#### Comedy
- Ata kama
- Wasikupee pressure
- umeanza jokes
- Ulisumbua sana
- Unaeza niambia ivo
- usinicheke
- Hakunakwanini
- aki wewe.
- naskia mpaka ulipiga sherehe
- sio?
- uko na kakitu
- Aaaaii
- .uko fity nayo..
- icome through mbaya...
#### Small talk
- Kuchil tu bana
- Inafaa hivo
- Acha niskizie
- Skujua hii stuff
- nacheza chini
- hii imesink deep.
- mi Niko
- khai, gai, ghaiye
- Woiye
- ndo nmeland
- Nimekuona
- Kaaai
- Nambie
- bado nashangaa aliipull thru maze
- Niambie
- Najua uko kejani
- Bado uko
- Utakuwa sawa
- Niko poa ata kama uniliacha hanging jana
- issa deal
- Walai io nilijua utasema
- hujawai sahau hii
- Sijajua bado
- Ni maroundi tu
- Enyewe imetoka mbali
- Hadi nimekuwa Tao leo
- Ni mnoma mbaya
- Anyway mambo ni polepole
- Imagine
- Sina la kusema
- Sai
- Najua umeboeka
#### Resolute
- Nataka leo
- hayo ndo maisha Sasa
- vile itakuja maze
- Acha tu
- Waaah Leo haiwezi
- Ni sawa tu
- Imeisha
- Itabidi
- siendagi
- siezi kuangusha
- nachangamkia hii
- Weno ivi...
- Hii price iko poa...
#### implore
- but nimetry tena
- aminia tu
- Ebu try
- Alafu
- naona hufeel kuongea
- Watu hawaongei?
- Itabidi tu umesort
- Naona huna shughuli yangu
- tufanye pamoja
- khai, gai, ghaiye
- so kalunch
- ama?
- Sahii ni the best time
- Kwanza sahii
- hii weekend
- Kaanza next weekend ni fity
- this weekend
- Acha ntacheki
- izo sasa..
- Acha tuone
- So tunafikanga ivor morning mapemaa
- naona uko rada
- mapema kiasi
- nimchapie niskie...
- Naisaka walai
#### Bye
- Ama kesho
- Ngoja nta rudi baadaye
- nacheki tu rada ya kesho
- Nitakusort kesho morning
- Ni hivo nimekafunga
- nitakushow
- Nextweek ndio inaeza
- Ntakuchapia kama ntamake
- Freshi
#### Sample Bot Responses
- tulia tu hana mambo mob
- si you know how we do it
- Form ni gani
- Oooh nmekuget
- znaeza kupea stress
- Hues make leo
- nshow password
- Nmeichangamkia design ya ngori
- Oooh nmekuget...
- ilicome through
- Naisaka walai
- kesho ntakuchapia
- nichapie niskie
- Aaaah..
- Alafu ile story ya
- Ooooh ebu ntasaka
- Saa ngapi uko free..
- Ama unasema ya
- Safiii..naona uko rada
- Ilkulemea
- Acha ntacheki
- imeharibia form..
- Nmeitafuta
- Ndio nimeget
- inaeza saidia mtu
- Email yako ni gani
- Wacha niangalie
- nangoja ulipe
- nimeshikika
- Sawa tuma email
- Kwani ulimwambia nini
- Najua ata most of the time
- mara most btw
- Unajua tu ni risky
- unadai tu niseme mi ni robot
- kwanini
- ndio usiulizwe
- Ukiangalia niambie
- Last time ukinipigia nilikuwa nimeenda kuoshwa
- ikishaenda kwa mganga hairudi
- Hata Mimi ni hayo mambo madogo madogo ndio imenieka.
- We jua nafikirianga mingi ni venye zingine huwa sisemi
- Na najua
- unarelax
- mm ata sko tensed
- sahii ata ni risky
- but ntakuchapia
- oooh waah..
- aaaah ata ww
- hii si fityy
- maze itabidi tudunde virtual
- tunadunda wapiiii..
- kwani sa mi ndo nafaa kumshow kila time coz this is not the first time namwambia️
- Wacha hizo.
- Yeah niko hapa
- Niko
- Give me sometime.
- Maze...nmecheza ki mimi
- Uko busy
- Chill kiasi
- Wacha nikusort
- ntakushow
- looking for you hupatikani
- Mnaniogopa ama
- Wewe unapenda free
- Nakusort sai chill mazee
- Kiasi
- relax mkubwa
- Sahii uko sorted sindio
- Ni juu
- bringing the future to us
- hiyo ni form yangu daily
- Ata mimi sitaki ufala
- Imagine
- Uko sawa
- Uko sawa ama unaitaji ingine
- ka unaeza
- utanichapia tu
- unasemaje lakini
- Niulize
- Uko na number
- Ukiboeka wewe nitext
- unadai sa hii ?
- skuwa nimeona
- Acha nicheki
- Ni Friday bana
- Niko chilled tu
- Unadai aje.
- Utanichapia basi
- Umenyamaza sana bana
- imekam through ama
- Nategea umalize ndo nikushow ile form
- Guidance tu kiasi
- Tutadiscuss pia stori
- Nakwelewa
- tujue niaje
- itaweza mbaya
- Kuna hopes za kulearn
| [
"## Common User Intentions",
"#### Greetings\r\n- Wasemaje\r\n- uko aje btw\r\n- oyah...\r\n- Form\r\n- Alafu niaje\r\n- Poa Sana Mambo\r\n- Niko poa\r\n- Pia Mimi Niko salama\r\n- Hope siku yako iko poa\r\n- Siko poa kabisa\r\n- Nimekuwa poa\r\n- Umeshindaje\r\n- Hope uko poa\r\n- uko poa\r\n- Sasa\r\n- Vipi vipi\r\n- Niko salama\r\n- ..its been long.\r\n- Nko fiti\r\n- niko fiti\r\n- Nmeamka fity..\r\n- Vipi\r\n- Unasemaje\r\n- Aaaah...itakuaje sasaa..\r\n - .iz vipi..itakuaje..\r\n - Form ni gani bro...\r\n - iz vipi",
"#### Affirm\r\n- Hapo sawa...\r\n- Fty\r\n- sai\r\n- Hio si ni better hadi\r\n- Imebidi.\r\n- Eeeh mazee\r\n- mazeee\r\n- Fity fity\r\n- Oooh poapoa\r\n- Yap \r\n- Inakaa poa\r\n- Yeah itabidi\r\n- Ooooh...\r\n- Si ndo nadaaiii\r\n- Oooh sawa\r\n- Okay sawa basi\r\n- Venye utaamua ni sawa\r\n- Sawa wacha tungoje\r\n- lazima\r\n- apa umenena\r\n- Sawa basi\r\n- walai\r\n- Oooh\r\n- inaweza mbaya\r\n- itaweza mbaya\r\n- ni sawa\r\n- Iko poa\r\n- Iko tu sawa hivo\r\n- ilinbamba.\r\n- Nimemada\r\n- Btw hao ata mimi naona\r\n- but inaeleweka\r\n- pia mimi\r\n- iende ikiendaga\r\n- We jua ivo\r\n- Hata Mimi\r\n- Nataka\r\n- Ooh.\r\n- Chezea tu hapo\r\n- isorait\r\n- Ata yako ni kali\r\n- Ntaicheck out Leo\r\n- hmm. Okay\r\n- Mimi sina shida\r\n- ooooh io iko fity...\r\n- hii ni ngori\r\n- maze\r\n- sawa\r\n- banaa\r\n- Aaah kumbe\r\n- Safiii..\r\n- Sasawa\r\n- hio ni fityyy\r\n- Yeah nliona\r\n- Vizii...\r\n- Eeeeh nmekua naiona...\r\n- Yea\r\n- Haina nomA\r\n- katambe\r\n- accept basi\r\n- ni sawa\r\n- Issaplan\r\n- nmeget\r\n- nimedai tu\r\n- eeh\r\n- Hio ni poa\r\n- nadai sa hii\r\n- Eeeeh\r\n- mi nadai tu \r\n- firi\r\n- Hapo freshi",
"#### Deny\r\n- Sipendi\r\n- aih\r\n- Nimegive up\r\n- Yangu bado\r\n- siezi make\r\n- Sina\r\n- Haileti\r\n- Haiwezi\r\n- Io sikuwa nikwambie\r\n- Sikuwa\r\n- Wacha ata\r\n- ata sijui\r\n- Sijasema\r\n- Sijai\r\n- hiyo haiezi\r\n- Bado.\r\n- Uku tricks...\r\n- sidai\r\n- achana nayo\r\n- ziii\r\n- si fityy\r\n- Nimekataa Mimi\r\n- Sijui\r\n- Aiwezekani\r\n- Bado sioni",
"#### Courtesy\r\n- Imefika... shukran\r\n- Haina ngori\r\n- Inafaa hivo\r\n- Utakuwa umeniokolea manzee\r\n- Karibu\r\n- Nyc one\r\n- Hakuna pressure\r\n- Gai. Pole\r\n- Usijali I will\r\n- Nimekufeel hapo\r\n- Waah izaa\r\n- Pole lkn\r\n- Pole\r\n- plz\r\n- okay...pole\r\n- thanks for pulling up lkn..\r\n- shukran\r\n- Eeeeh nyc\r\n- Thanx for the info\r\n- Uko aje\r\n- haina pressure\r\n- eih, iko fiti.\r\n- vitu kama hizo\r\n- sahii",
"#### Asking clarification\r\n- check alafu unishow\r\n- Sasa msee akishabuy anafanya aje\r\n- Umeenda wapi\r\n- nlikuwa nadai\r\n- Nlikua nataka\r\n- Ulipata\r\n- leo jioni utakuwa?\r\n- uko\r\n- umelostia wapi?\r\n- ingine?\r\n- hii inamaanisha?\r\n- Wewe Sasa ni nani?\r\n- warrathos\r\n- kwani nisiende sasa\r\n- unadai zingine?\r\n- Kwani\r\n- Haiya...\r\n- Unadu?\r\n- inakuanga mangapiii...\r\n- Kuna nn\r\n- Nauliza\r\n- Hakuna kwanini\r\n- Nadai kujua what\r\n- Kwanini hakuna\r\n- Kwa nini hakuna\r\n- Uliniambia\r\n- Mbona\r\n- Nlikua nashangaa\r\n- Unadu nini\r\n- Oooh mara moja\r\n- Unaeza taka?\r\n- unaeza make?\r\n- Umeipata?\r\n- wapi kwingine tena\r\n- kuna yenye natafuta\r\n- Sijajua bado\r\n- Niko na ingine\r\n- ulikuwa unataka\r\n- ulinishow?\r\n- ulinsho\r\n- Umepata\r\n- Ata stage hakuna?\r\n- Huku hakuna kibandaski?\r\n- Sai ndio uko available\r\n- Ivo\r\n- Inaeza\r\n- Naeza\r\n- Btw, nikuulize\r\n- Uliza\r\n- hadi sa hii\r\n- Nauliza ndio nijue kama bado iko\r\n- Btw ile hoteli tulienda na wewe apo kiimbo huendangi?",
"#### Comedy\r\n- Ata kama\r\n- Wasikupee pressure\r\n- umeanza jokes\r\n- Ulisumbua sana\r\n- Unaeza niambia ivo\r\n- usinicheke\r\n- Hakunakwanini\r\n- aki wewe.\r\n- naskia mpaka ulipiga sherehe\r\n- sio?\r\n- uko na kakitu\r\n- Aaaaii \r\n- .uko fity nayo..\r\n- icome through mbaya...",
"#### Small talk\r\n- Kuchil tu bana\r\n- Inafaa hivo\r\n- Acha niskizie\r\n- Skujua hii stuff\r\n- nacheza chini\r\n- hii imesink deep.\r\n- mi Niko\r\n- khai, gai, ghaiye\r\n- Woiye\r\n- ndo nmeland\r\n- Nimekuona\r\n- Kaaai\r\n- Nambie\r\n- bado nashangaa aliipull thru maze\r\n- Niambie\r\n- Najua uko kejani\r\n- Bado uko\r\n- Utakuwa sawa\r\n- Niko poa ata kama uniliacha hanging jana\r\n- issa deal\r\n- Walai io nilijua utasema\r\n- hujawai sahau hii\r\n- Sijajua bado\r\n- Ni maroundi tu\r\n- Enyewe imetoka mbali\r\n- Hadi nimekuwa Tao leo\r\n- Ni mnoma mbaya\r\n- Anyway mambo ni polepole\r\n- Imagine\r\n- Sina la kusema\r\n- Sai\r\n- Najua umeboeka",
"#### Resolute\r\n- Nataka leo\r\n- hayo ndo maisha Sasa\r\n- vile itakuja maze\r\n- Acha tu\r\n- Waaah Leo haiwezi\r\n- Ni sawa tu\r\n- Imeisha\r\n- Itabidi\r\n- siendagi\r\n- siezi kuangusha\r\n- nachangamkia hii\r\n- Weno ivi...\r\n- Hii price iko poa...",
"#### implore\r\n- but nimetry tena\r\n- aminia tu\r\n- Ebu try\r\n- Alafu\r\n- naona hufeel kuongea\r\n- Watu hawaongei?\r\n- Itabidi tu umesort\r\n- Naona huna shughuli yangu\r\n- tufanye pamoja\r\n- khai, gai, ghaiye\r\n- so kalunch\r\n- ama?\r\n- Sahii ni the best time\r\n- Kwanza sahii\r\n- hii weekend\r\n- Kaanza next weekend ni fity\r\n- this weekend\r\n- Acha ntacheki\r\n- izo sasa..\r\n- Acha tuone\r\n- So tunafikanga ivor morning mapemaa\r\n- naona uko rada\r\n- mapema kiasi\r\n- nimchapie niskie... \r\n- Naisaka walai",
"#### Bye\r\n- Ama kesho\r\n- Ngoja nta rudi baadaye\r\n- nacheki tu rada ya kesho\r\n- Nitakusort kesho morning\r\n- Ni hivo nimekafunga\r\n- nitakushow\r\n- Nextweek ndio inaeza\r\n- Ntakuchapia kama ntamake\r\n- Freshi",
"#### Sample Bot Responses\r\n- tulia tu hana mambo mob\r\n- si you know how we do it \r\n- Form ni gani\r\n- Oooh nmekuget\r\n- znaeza kupea stress\r\n- Hues make leo\r\n- nshow password\r\n- Nmeichangamkia design ya ngori\r\n- Oooh nmekuget...\r\n- ilicome through\r\n- Naisaka walai\r\n- kesho ntakuchapia\r\n- nichapie niskie\r\n- Aaaah..\r\n- Alafu ile story ya\r\n- Ooooh ebu ntasaka\r\n- Saa ngapi uko free..\r\n- Ama unasema ya\r\n- Safiii..naona uko rada\r\n- Ilkulemea\r\n- Acha ntacheki\r\n- imeharibia form..\r\n- Nmeitafuta\r\n- Ndio nimeget\r\n- inaeza saidia mtu\r\n- Email yako ni gani\r\n- Wacha niangalie\r\n- nangoja ulipe\r\n- nimeshikika\r\n- Sawa tuma email\r\n- Kwani ulimwambia nini\r\n- Najua ata most of the time\r\n- mara most btw\r\n- Unajua tu ni risky\r\n- unadai tu niseme mi ni robot\r\n- kwanini\r\n- ndio usiulizwe\r\n- Ukiangalia niambie\r\n- Last time ukinipigia nilikuwa nimeenda kuoshwa\r\n- ikishaenda kwa mganga hairudi\r\n- Hata Mimi ni hayo mambo madogo madogo ndio imenieka.\r\n- We jua nafikirianga mingi ni venye zingine huwa sisemi\r\n- Na najua\r\n- unarelax\r\n- mm ata sko tensed\r\n- sahii ata ni risky\r\n- but ntakuchapia\r\n- oooh waah..\r\n- aaaah ata ww\r\n- hii si fityy\r\n- maze itabidi tudunde virtual\r\n- tunadunda wapiiii..\r\n- kwani sa mi ndo nafaa kumshow kila time coz this is not the first time namwambia️\r\n- Wacha hizo.\r\n- Yeah niko hapa\r\n- Niko\r\n- Give me sometime.\r\n- Maze...nmecheza ki mimi\r\n- Uko busy\r\n- Chill kiasi\r\n- Wacha nikusort\r\n- ntakushow\r\n- looking for you hupatikani\r\n- Mnaniogopa ama\r\n- Wewe unapenda free\r\n- Nakusort sai chill mazee\r\n- Kiasi\r\n- relax mkubwa\r\n- Sahii uko sorted sindio\r\n- Ni juu\r\n- bringing the future to us\r\n- hiyo ni form yangu daily\r\n- Ata mimi sitaki ufala \r\n- Imagine\r\n- Uko sawa\r\n- Uko sawa ama unaitaji ingine\r\n- ka unaeza\r\n- utanichapia tu\r\n- unasemaje lakini\r\n- Niulize\r\n- Uko na number\r\n- Ukiboeka wewe nitext\r\n- unadai sa hii ?\r\n- skuwa nimeona\r\n- Acha nicheki\r\n- Ni Friday bana\r\n- Niko chilled tu\r\n- Unadai aje.\r\n- Utanichapia basi\r\n- Umenyamaza sana bana\r\n- imekam through ama\r\n- Nategea umalize ndo nikushow ile form\r\n- Guidance tu kiasi\r\n- Tutadiscuss pia stori\r\n- Nakwelewa\r\n- tujue niaje\r\n- itaweza mbaya\r\n- Kuna hopes za kulearn"
] | [
"TAGS\n#license-mit #region-us \n",
"## Common User Intentions",
"#### Greetings\r\n- Wasemaje\r\n- uko aje btw\r\n- oyah...\r\n- Form\r\n- Alafu niaje\r\n- Poa Sana Mambo\r\n- Niko poa\r\n- Pia Mimi Niko salama\r\n- Hope siku yako iko poa\r\n- Siko poa kabisa\r\n- Nimekuwa poa\r\n- Umeshindaje\r\n- Hope uko poa\r\n- uko poa\r\n- Sasa\r\n- Vipi vipi\r\n- Niko salama\r\n- ..its been long.\r\n- Nko fiti\r\n- niko fiti\r\n- Nmeamka fity..\r\n- Vipi\r\n- Unasemaje\r\n- Aaaah...itakuaje sasaa..\r\n - .iz vipi..itakuaje..\r\n - Form ni gani bro...\r\n - iz vipi",
"#### Affirm\r\n- Hapo sawa...\r\n- Fty\r\n- sai\r\n- Hio si ni better hadi\r\n- Imebidi.\r\n- Eeeh mazee\r\n- mazeee\r\n- Fity fity\r\n- Oooh poapoa\r\n- Yap \r\n- Inakaa poa\r\n- Yeah itabidi\r\n- Ooooh...\r\n- Si ndo nadaaiii\r\n- Oooh sawa\r\n- Okay sawa basi\r\n- Venye utaamua ni sawa\r\n- Sawa wacha tungoje\r\n- lazima\r\n- apa umenena\r\n- Sawa basi\r\n- walai\r\n- Oooh\r\n- inaweza mbaya\r\n- itaweza mbaya\r\n- ni sawa\r\n- Iko poa\r\n- Iko tu sawa hivo\r\n- ilinbamba.\r\n- Nimemada\r\n- Btw hao ata mimi naona\r\n- but inaeleweka\r\n- pia mimi\r\n- iende ikiendaga\r\n- We jua ivo\r\n- Hata Mimi\r\n- Nataka\r\n- Ooh.\r\n- Chezea tu hapo\r\n- isorait\r\n- Ata yako ni kali\r\n- Ntaicheck out Leo\r\n- hmm. Okay\r\n- Mimi sina shida\r\n- ooooh io iko fity...\r\n- hii ni ngori\r\n- maze\r\n- sawa\r\n- banaa\r\n- Aaah kumbe\r\n- Safiii..\r\n- Sasawa\r\n- hio ni fityyy\r\n- Yeah nliona\r\n- Vizii...\r\n- Eeeeh nmekua naiona...\r\n- Yea\r\n- Haina nomA\r\n- katambe\r\n- accept basi\r\n- ni sawa\r\n- Issaplan\r\n- nmeget\r\n- nimedai tu\r\n- eeh\r\n- Hio ni poa\r\n- nadai sa hii\r\n- Eeeeh\r\n- mi nadai tu \r\n- firi\r\n- Hapo freshi",
"#### Deny\r\n- Sipendi\r\n- aih\r\n- Nimegive up\r\n- Yangu bado\r\n- siezi make\r\n- Sina\r\n- Haileti\r\n- Haiwezi\r\n- Io sikuwa nikwambie\r\n- Sikuwa\r\n- Wacha ata\r\n- ata sijui\r\n- Sijasema\r\n- Sijai\r\n- hiyo haiezi\r\n- Bado.\r\n- Uku tricks...\r\n- sidai\r\n- achana nayo\r\n- ziii\r\n- si fityy\r\n- Nimekataa Mimi\r\n- Sijui\r\n- Aiwezekani\r\n- Bado sioni",
"#### Courtesy\r\n- Imefika... shukran\r\n- Haina ngori\r\n- Inafaa hivo\r\n- Utakuwa umeniokolea manzee\r\n- Karibu\r\n- Nyc one\r\n- Hakuna pressure\r\n- Gai. Pole\r\n- Usijali I will\r\n- Nimekufeel hapo\r\n- Waah izaa\r\n- Pole lkn\r\n- Pole\r\n- plz\r\n- okay...pole\r\n- thanks for pulling up lkn..\r\n- shukran\r\n- Eeeeh nyc\r\n- Thanx for the info\r\n- Uko aje\r\n- haina pressure\r\n- eih, iko fiti.\r\n- vitu kama hizo\r\n- sahii",
"#### Asking clarification\r\n- check alafu unishow\r\n- Sasa msee akishabuy anafanya aje\r\n- Umeenda wapi\r\n- nlikuwa nadai\r\n- Nlikua nataka\r\n- Ulipata\r\n- leo jioni utakuwa?\r\n- uko\r\n- umelostia wapi?\r\n- ingine?\r\n- hii inamaanisha?\r\n- Wewe Sasa ni nani?\r\n- warrathos\r\n- kwani nisiende sasa\r\n- unadai zingine?\r\n- Kwani\r\n- Haiya...\r\n- Unadu?\r\n- inakuanga mangapiii...\r\n- Kuna nn\r\n- Nauliza\r\n- Hakuna kwanini\r\n- Nadai kujua what\r\n- Kwanini hakuna\r\n- Kwa nini hakuna\r\n- Uliniambia\r\n- Mbona\r\n- Nlikua nashangaa\r\n- Unadu nini\r\n- Oooh mara moja\r\n- Unaeza taka?\r\n- unaeza make?\r\n- Umeipata?\r\n- wapi kwingine tena\r\n- kuna yenye natafuta\r\n- Sijajua bado\r\n- Niko na ingine\r\n- ulikuwa unataka\r\n- ulinishow?\r\n- ulinsho\r\n- Umepata\r\n- Ata stage hakuna?\r\n- Huku hakuna kibandaski?\r\n- Sai ndio uko available\r\n- Ivo\r\n- Inaeza\r\n- Naeza\r\n- Btw, nikuulize\r\n- Uliza\r\n- hadi sa hii\r\n- Nauliza ndio nijue kama bado iko\r\n- Btw ile hoteli tulienda na wewe apo kiimbo huendangi?",
"#### Comedy\r\n- Ata kama\r\n- Wasikupee pressure\r\n- umeanza jokes\r\n- Ulisumbua sana\r\n- Unaeza niambia ivo\r\n- usinicheke\r\n- Hakunakwanini\r\n- aki wewe.\r\n- naskia mpaka ulipiga sherehe\r\n- sio?\r\n- uko na kakitu\r\n- Aaaaii \r\n- .uko fity nayo..\r\n- icome through mbaya...",
"#### Small talk\r\n- Kuchil tu bana\r\n- Inafaa hivo\r\n- Acha niskizie\r\n- Skujua hii stuff\r\n- nacheza chini\r\n- hii imesink deep.\r\n- mi Niko\r\n- khai, gai, ghaiye\r\n- Woiye\r\n- ndo nmeland\r\n- Nimekuona\r\n- Kaaai\r\n- Nambie\r\n- bado nashangaa aliipull thru maze\r\n- Niambie\r\n- Najua uko kejani\r\n- Bado uko\r\n- Utakuwa sawa\r\n- Niko poa ata kama uniliacha hanging jana\r\n- issa deal\r\n- Walai io nilijua utasema\r\n- hujawai sahau hii\r\n- Sijajua bado\r\n- Ni maroundi tu\r\n- Enyewe imetoka mbali\r\n- Hadi nimekuwa Tao leo\r\n- Ni mnoma mbaya\r\n- Anyway mambo ni polepole\r\n- Imagine\r\n- Sina la kusema\r\n- Sai\r\n- Najua umeboeka",
"#### Resolute\r\n- Nataka leo\r\n- hayo ndo maisha Sasa\r\n- vile itakuja maze\r\n- Acha tu\r\n- Waaah Leo haiwezi\r\n- Ni sawa tu\r\n- Imeisha\r\n- Itabidi\r\n- siendagi\r\n- siezi kuangusha\r\n- nachangamkia hii\r\n- Weno ivi...\r\n- Hii price iko poa...",
"#### implore\r\n- but nimetry tena\r\n- aminia tu\r\n- Ebu try\r\n- Alafu\r\n- naona hufeel kuongea\r\n- Watu hawaongei?\r\n- Itabidi tu umesort\r\n- Naona huna shughuli yangu\r\n- tufanye pamoja\r\n- khai, gai, ghaiye\r\n- so kalunch\r\n- ama?\r\n- Sahii ni the best time\r\n- Kwanza sahii\r\n- hii weekend\r\n- Kaanza next weekend ni fity\r\n- this weekend\r\n- Acha ntacheki\r\n- izo sasa..\r\n- Acha tuone\r\n- So tunafikanga ivor morning mapemaa\r\n- naona uko rada\r\n- mapema kiasi\r\n- nimchapie niskie... \r\n- Naisaka walai",
"#### Bye\r\n- Ama kesho\r\n- Ngoja nta rudi baadaye\r\n- nacheki tu rada ya kesho\r\n- Nitakusort kesho morning\r\n- Ni hivo nimekafunga\r\n- nitakushow\r\n- Nextweek ndio inaeza\r\n- Ntakuchapia kama ntamake\r\n- Freshi",
"#### Sample Bot Responses\r\n- tulia tu hana mambo mob\r\n- si you know how we do it \r\n- Form ni gani\r\n- Oooh nmekuget\r\n- znaeza kupea stress\r\n- Hues make leo\r\n- nshow password\r\n- Nmeichangamkia design ya ngori\r\n- Oooh nmekuget...\r\n- ilicome through\r\n- Naisaka walai\r\n- kesho ntakuchapia\r\n- nichapie niskie\r\n- Aaaah..\r\n- Alafu ile story ya\r\n- Ooooh ebu ntasaka\r\n- Saa ngapi uko free..\r\n- Ama unasema ya\r\n- Safiii..naona uko rada\r\n- Ilkulemea\r\n- Acha ntacheki\r\n- imeharibia form..\r\n- Nmeitafuta\r\n- Ndio nimeget\r\n- inaeza saidia mtu\r\n- Email yako ni gani\r\n- Wacha niangalie\r\n- nangoja ulipe\r\n- nimeshikika\r\n- Sawa tuma email\r\n- Kwani ulimwambia nini\r\n- Najua ata most of the time\r\n- mara most btw\r\n- Unajua tu ni risky\r\n- unadai tu niseme mi ni robot\r\n- kwanini\r\n- ndio usiulizwe\r\n- Ukiangalia niambie\r\n- Last time ukinipigia nilikuwa nimeenda kuoshwa\r\n- ikishaenda kwa mganga hairudi\r\n- Hata Mimi ni hayo mambo madogo madogo ndio imenieka.\r\n- We jua nafikirianga mingi ni venye zingine huwa sisemi\r\n- Na najua\r\n- unarelax\r\n- mm ata sko tensed\r\n- sahii ata ni risky\r\n- but ntakuchapia\r\n- oooh waah..\r\n- aaaah ata ww\r\n- hii si fityy\r\n- maze itabidi tudunde virtual\r\n- tunadunda wapiiii..\r\n- kwani sa mi ndo nafaa kumshow kila time coz this is not the first time namwambia️\r\n- Wacha hizo.\r\n- Yeah niko hapa\r\n- Niko\r\n- Give me sometime.\r\n- Maze...nmecheza ki mimi\r\n- Uko busy\r\n- Chill kiasi\r\n- Wacha nikusort\r\n- ntakushow\r\n- looking for you hupatikani\r\n- Mnaniogopa ama\r\n- Wewe unapenda free\r\n- Nakusort sai chill mazee\r\n- Kiasi\r\n- relax mkubwa\r\n- Sahii uko sorted sindio\r\n- Ni juu\r\n- bringing the future to us\r\n- hiyo ni form yangu daily\r\n- Ata mimi sitaki ufala \r\n- Imagine\r\n- Uko sawa\r\n- Uko sawa ama unaitaji ingine\r\n- ka unaeza\r\n- utanichapia tu\r\n- unasemaje lakini\r\n- Niulize\r\n- Uko na number\r\n- Ukiboeka wewe nitext\r\n- unadai sa hii ?\r\n- skuwa nimeona\r\n- Acha nicheki\r\n- Ni Friday bana\r\n- Niko chilled tu\r\n- Unadai aje.\r\n- Utanichapia basi\r\n- Umenyamaza sana bana\r\n- imekam through ama\r\n- Nategea umalize ndo nikushow ile form\r\n- Guidance tu kiasi\r\n- Tutadiscuss pia stori\r\n- Nakwelewa\r\n- tujue niaje\r\n- itaweza mbaya\r\n- Kuna hopes za kulearn"
] |
70b2d68664a3c8e841f426cf8e43f4f669a75017 |
⚠️ This is only a subset of the original dataset, containing only `questionnaire`.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley ([email protected]).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertissement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
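For convenience, a minimal loading sketch is shown below. It is only a sketch under assumptions: it presumes the subset exposes standard `image` and `label` columns and a `train` split (none of which this card documents), and the label names simply mirror the list above, spelling included.
```
from datasets import load_dataset

# Label names in id order, copied from the list above (spelling kept as on the card).
LABELS = [
    "letter", "form", "email", "handwritten", "advertissement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

# Assumptions: an `image` column holding PIL images, an integer `label` column,
# and a `train` split -- none of these are confirmed by the card itself.
dataset = load_dataset("chainyo/rvl-cdip-questionnaire", split="train")
sample = dataset[0]
print(sample["image"].size, LABELS[sample["label"]])  # should print "questionnaire"
```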
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. | chainyo/rvl-cdip-questionnaire | [
"license:other",
"region:us"
] | 2022-04-06T15:34:49+00:00 | {"license": "other"} | 2022-04-06T15:45:26+00:00 | [] | [] | TAGS
#license-other #region-us
|
This is only a subset of the original dataset, containing only 'questionnaire'.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@URL).
The full dataset can be found here.
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertissement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
This dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015'
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL | [
"## Labels\n\n 0: letter \n 1: form \n 2: email \n 3: handwritten \n 4: advertissement \n 5: scientific report \n 6: scientific publication \n 7: specification \n 8: file folder \n 9: news article \n10: budget \n11: invoice \n12: presentation \n13: questionnaire \n14: resume \n15: memo \n\nThis dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, \"Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval,\" in ICDAR, 2015'",
"## License\n\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.",
"## References\n\n1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, \"Building a test collection for complex document information processing,\" in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006\n2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL"
] | [
"TAGS\n#license-other #region-us \n",
"## Labels\n\n 0: letter \n 1: form \n 2: email \n 3: handwritten \n 4: advertissement \n 5: scientific report \n 6: scientific publication \n 7: specification \n 8: file folder \n 9: news article \n10: budget \n11: invoice \n12: presentation \n13: questionnaire \n14: resume \n15: memo \n\nThis dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, \"Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval,\" in ICDAR, 2015'",
"## License\n\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.",
"## References\n\n1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, \"Building a test collection for complex document information processing,\" in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006\n2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL"
] |
fad615c9ceaecb4476b0a01f29c0a15b276b3a2b |
⚠️ This is only a subset of the original dataset, containing only `invoice`.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley ([email protected]).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
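As a quick sanity check of the size constraint described above, a small sketch such as the one below could stream a few samples and verify it; the `image` column name and the `train` split are assumptions, not documented on this card.
```
from datasets import load_dataset

# Assumptions: an `image` column holding decoded PIL images and a `train`
# split; both are guesses, since the card does not document the schema.
dataset = load_dataset("chainyo/rvl-cdip-invoice", split="train", streaming=True)
for i, sample in enumerate(dataset):
    width, height = sample["image"].size
    assert max(width, height) <= 1000, f"unexpected size {width}x{height}"
    if i == 9:  # only look at the first ten images
        break
print("first ten invoice images respect the 1000-pixel bound")
```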
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertissement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. | chainyo/rvl-cdip-invoice | [
"license:other",
"region:us"
] | 2022-04-06T15:52:14+00:00 | {"license": "other"} | 2022-04-06T15:57:20+00:00 | [] | [] | TAGS
#license-other #region-us
|
This is only a subset of the original dataset, containing only 'invoice'.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@URL).
The full dataset can be found here.
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertissement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
This dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015'
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL | [
"## Labels\n\n 0: letter \n 1: form \n 2: email \n 3: handwritten \n 4: advertissement \n 5: scientific report \n 6: scientific publication \n 7: specification \n 8: file folder \n 9: news article \n10: budget \n11: invoice \n12: presentation \n13: questionnaire \n14: resume \n15: memo \n\nThis dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, \"Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval,\" in ICDAR, 2015'",
"## License\n\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.",
"## References\n\n1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, \"Building a test collection for complex document information processing,\" in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006\n2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL"
] | [
"TAGS\n#license-other #region-us \n",
"## Labels\n\n 0: letter \n 1: form \n 2: email \n 3: handwritten \n 4: advertissement \n 5: scientific report \n 6: scientific publication \n 7: specification \n 8: file folder \n 9: news article \n10: budget \n11: invoice \n12: presentation \n13: questionnaire \n14: resume \n15: memo \n\nThis dataset is from this paper 'A. W. Harley, A. Ufkes, K. G. Derpanis, \"Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval,\" in ICDAR, 2015'",
"## License\n\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.",
"## References\n\n1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, \"Building a test collection for complex document information processing,\" in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006\n2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. URL"
] |
65bffe2a1449459207f82c5ed130487e74916cbf | # Dataset Card for Ukr-Synth
## Dataset Description
### Dataset Summary
Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.
Represents a subsample of [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained using gold standard Ukrainian language datasets.
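A minimal exploratory sketch (not part of the original card) is given below; because the column layout is not documented here, it only prints the schema and one example rather than assuming field names.
```
from datasets import load_dataset

# Exploratory sketch: prints the feature schema and one record.
# If the loader requires a configuration name, it would be passed as the
# second positional argument of load_dataset (an assumption, not documented here).
dataset = load_dataset("ukr-models/Ukr-Synth", split="validation")
print(dataset.features)  # expect token, morphology, dependency and NER columns
print(dataset[0])
```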
### Languages
Ukrainian
## Dataset Structure
### Data Splits
| name |train |validation|
|---------|-------:|---------:|
|conll2003|1000000| 10000|
## Dataset Creation
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
## Additional Information
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | ukr-models/Ukr-Synth | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:uk",
"license:mit",
"region:us"
] | 2022-04-06T16:13:34+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["uk"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "parsing", "part-of-speech"], "pretty_name": "Ukrainian synthetic dataset in conllu format"} | 2023-08-31T08:35:43+00:00 | [] | [
"uk"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-parsing #task_ids-part-of-speech #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-Ukrainian #license-mit #region-us
| Dataset Card for Ukr-Synth
==========================
Dataset Description
-------------------
### Dataset Summary
Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.
Represents a subsample of Leipzig Corpora Collection for Ukrainian Language. The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained using gold standard Ukrainian language datasets.
### Languages
Ukrainian
Dataset Structure
-----------------
### Data Splits
Dataset Creation
----------------
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
Additional Information
----------------------
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| [
"### Dataset Summary\n\n\nLarge silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.\nRepresents a subsample of Leipzig Corpora Collection for Ukrainian Language. The source texts are newspaper texts split into sentences and shuffled. The sentrences are annotated using transformer-based models trained using gold standard Ukrainian language datasets.",
"### Languages\n\n\nUkrainian\n\n\nDataset Structure\n-----------------",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nLeipzig Corpora Collection:\n\n\nD. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.\nIn: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nMIT License\n\n\nCopyright (c) 2022\n\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-parsing #task_ids-part-of-speech #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-Ukrainian #license-mit #region-us \n",
"### Dataset Summary\n\n\nLarge silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.\nRepresents a subsample of Leipzig Corpora Collection for Ukrainian Language. The source texts are newspaper texts split into sentences and shuffled. The sentrences are annotated using transformer-based models trained using gold standard Ukrainian language datasets.",
"### Languages\n\n\nUkrainian\n\n\nDataset Structure\n-----------------",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nLeipzig Corpora Collection:\n\n\nD. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.\nIn: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nMIT License\n\n\nCopyright (c) 2022\n\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
] |
3ae28881776a1a2f797fa1c2273f16136908c3ff |
# Dataset Card for BibleNLP Corpus
### Dataset Summary
Partial and complete Bible translations in 833 languages, aligned by verse.
### Languages
aai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc, aui, auy, avt, awb, awk, awx, azb, azg, azz, bao, bba, bbb, bbr, bch, bco, bdd, bea, bef, bel, ben, beo, beu, bgs, bgt, bhg, bhl, big, bjk, bjp, bjr, bjv, bjz, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bmu, bnp, boa, boj, bon, box, bpr, bps, bqc, bqp, bre, bsj, bsn, bsp, bss, buk, bus, bvd, bvr, bxh, byr, byx, bzd, bzh, bzj, caa, cab, cac, caf, cak, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ceb, cek, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, ckb, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, cpy, crn, crx, cso, csy, cta, cth, ctp, ctu, cub, cuc, cui, cuk, cut, cux, cwe, cya, daa, dad, dah, dan, ded, deu, dgc, dgr, dgz, dhg, dif, dik, dji, djk, djr, dob, dop, dov, dwr, dww, dwy, ebk, eko, emi, emp, eng, enq, epo, eri, ese, esk, etr, ewe, faa, fai, far, ffm, for, fra, fue, fuf, fuh, gah, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, guj, gul, gum, gun, guo, gup, gux, gvc, gvf, gvn, gvs, gwi, gym, gyr, hat, hau, haw, hbo, hch, heb, heg, hin, hix, hla, hlt, hmo, hns, hop, hot, hrv, hto, hub, hui, hun, hus, huu, huv, hvn, ian, ign, ikk, ikw, ilo, imo, inb, ind, ino, iou, ipi, isn, ita, iws, ixl, jac, jae, jao, jic, jid, jiv, jni, jpn, jvn, kan, kaq, kbc, kbh, kbm, kbq, kdc, kde, kdl, kek, ken, kew, kgf, kgk, kgp, khs, khz, kik, kiw, kiz, kje, kjn, kjs, kkc, kkl, klt, klv, kmg, kmh, kmk, kmo, kms, kmu, kne, knf, knj, knv, kos, kpf, kpg, kpj, kpr, kpw, kpx, kqa, kqc, kqf, kql, kqw, ksd, ksj, ksr, ktm, kto, kud, kue, kup, kvg, kvn, kwd, kwf, kwi, kwj, kyc, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, lbk, lcm, leu, lex, lgl, lid, lif, lin, lit, llg, lug, luo, lww, maa, maj, mal, mam, maq, mar, mau, mav, maz, mbb, mbc, mbh, mbj, mbl, mbs, mbt, mca, mcb, mcd, mcf, mco, mcp, mcq, mcr, mdy, med, mee, mek, meq, met, meu, mgc, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkj, mkl, mkn, mks, mle, mlh, mlp, mmo, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpt, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, mto, mux, muy, mva, mvn, mwc, mwe, mwf, mwp, mxb, mxp, mxq, mxt, mya, myk, myu, myw, myy, mzz, nab, naf, nak, nas, nay, nbq, nca, nch, ncj, ncl, ncu, ndg, ndj, nfa, ngp, ngu, nhe, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nii, nin, nko, nld, nlg, nmw, nna, nnq, noa, nop, not, nou, npi, npl, nsn, nss, ntj, ntp, ntu, nuy, nvm, nwi, nya, nys, nyu, obo, okv, omw, ong, ons, ood, opm, ory, ote, otm, otn, otq, ots, pab, pad, pah, pan, pao, pes, pib, pio, pir, piu, pjt, pls, plu, pma, poe, poh, poi, pol, pon, por, poy, ppo, prf, pri, ptp, ptu, pwg, qub, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, reg, rgu, rkb, rmc, rmy, ron, roo, rop, row, rro, ruf, rug, rus, rwo, sab, san, sbe, sbk, sbs, seh, sey, sgb, sgz, shj, shp, sim, sja, sll, smk, snc, snn, snp, snx, sny, som, soq, soy, spa, spl, spm, spp, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, taj, tam, tav, taw, tbc, tbf, tbg, tbl, tbo, tbz, tca, tcs, tcz, tdt, tee, tel, ter, tet, tew, tfr, tgk, tgl, tgo, tgp, tha, thd, tif, tim, tiw, tiy, tke, tku, tlf, tmd, tna, tnc, tnk, tnn, tnp, toc, tod, tof, toj, ton, too, top, 
tos, tpa, tpi, tpt, tpz, trc, tsw, ttc, tte, tuc, tue, tuf, tuo, tur, tvk, twi, txq, txu, tzj, tzo, ubr, ubu, udu, uig, ukr, uli, ulk, upv, ura, urb, urd, uri, urt, urw, usa, usp, uvh, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbi, wbp, wed, wer, wim, wiu, wiv, wmt, wmw, wnc, wnu, wol, wos, wrk, wro, wrs, wsk, wuv, xav, xbi, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yle, yml, yon, yor, yrb, yre, yss, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zap, zar, zas, zat, zav, zaw, zca, zga, zia, ziw, zlm, zos, zpc, zpl, zpm, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure
### Data Fields
**translation**
- **languages** - an N-length list of the languages of the translations, sorted alphabetically
- **translation** - an N-length list of the translations, each corresponding to the language at the same position in the field above
**files**
- **lang** - an N-length list of the languages of the files, in order of input
- **file** - an N-length list of the filenames from the corpus on GitHub, each corresponding to the language at the same position in the field above
**ref** - the verse(s) contained in the record, as a list, with each verse represented as: ``<three letter book code> <chapter number>:<verse number>``
**licenses** - an N-length list of licenses, corresponding to the list of files above
**copyrights** - information on copyright holders, corresponding to the list of files above
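As an illustration of the shape implied by the field descriptions above, one English-French record might look roughly as follows; the placeholder strings and the exact nesting (e.g., whether `translation` wraps both sub-lists) are assumptions, not values taken from the corpus.
```
{
  "translation": {
    "languages": ["eng", "fra"],
    "translation": ["<verse text in eng>", "<verse text in fra>"]
  },
  "files": {
    "lang": ["eng", "fra"],
    "file": ["<eng filename>", "<fra filename>"]
  },
  "ref": ["GEN 1:1"],
  "licenses": ["<license of the eng file>", "<license of the fra file>"],
  "copyrights": ["<copyright holder of the eng file>", "<copyright holder of the fra file>"]
}
```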
### Usage
The dataset loading script requires tqdm, ijson, and numpy to be installed.
Specify the languages to be paired as a list of ISO 639-3 language codes, such as ``languages = ['eng', 'fra']``.
By default, the script returns individual verse pairs as well as pairings that cover a full verse range. If only individual verses are desired, use ``pair='single'``. If only the maximum-range pairing is desired, use ``pair='range'`` (for example, if one text uses a verse range covering GEN 1:1-3, all texts would return only the full-range pairing).
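A hedged loading sketch based on the description above is shown below; whether `languages` and `pair` are accepted exactly as keyword arguments of `load_dataset` in this form is an assumption about the loading script.
```
from datasets import load_dataset

# Assumption: the loading script accepts `languages` and `pair` as builder
# keyword arguments forwarded by load_dataset.
corpus = load_dataset(
    "bible-nlp/biblenlp-corpus",
    languages=["eng", "fra"],  # ISO 639-3 codes
    pair="single",             # or "range" for only the maximum-range pairings
)
print(corpus)
```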
## Sources
https://github.com/BibleNLP/ebible-corpus | bible-nlp/biblenlp-corpus | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aai",
"language:aak",
"language:aau",
"language:aaz",
"language:abt",
"language:abx",
"language:aby",
"language:acf",
"language:acr",
"language:acu",
"language:adz",
"language:aer",
"language:aey",
"language:agd",
"language:agg",
"language:agm",
"language:agn",
"language:agr",
"language:agt",
"language:agu",
"language:aia",
"language:aii",
"language:aka",
"language:ake",
"language:alp",
"language:alq",
"language:als",
"language:aly",
"language:ame",
"language:amf",
"language:amk",
"language:amm",
"language:amn",
"language:amo",
"language:amp",
"language:amr",
"language:amu",
"language:amx",
"language:anh",
"language:anv",
"language:aoi",
"language:aoj",
"language:aom",
"language:aon",
"language:apb",
"language:ape",
"language:apn",
"language:apr",
"language:apu",
"language:apw",
"language:apz",
"language:arb",
"language:are",
"language:arl",
"language:arn",
"language:arp",
"language:asm",
"language:aso",
"language:ata",
"language:atb",
"language:atd",
"language:atg",
"language:att",
"language:auc",
"language:aui",
"language:auy",
"language:avt",
"language:awb",
"language:awk",
"language:awx",
"language:azb",
"language:azg",
"language:azz",
"language:bao",
"language:bba",
"language:bbb",
"language:bbr",
"language:bch",
"language:bco",
"language:bdd",
"language:bea",
"language:bef",
"language:bel",
"language:ben",
"language:beo",
"language:beu",
"language:bgs",
"language:bgt",
"language:bhg",
"language:bhl",
"language:big",
"language:bjk",
"language:bjp",
"language:bjr",
"language:bjv",
"language:bjz",
"language:bkd",
"language:bki",
"language:bkq",
"language:bkx",
"language:bla",
"language:blw",
"language:blz",
"language:bmh",
"language:bmk",
"language:bmr",
"language:bmu",
"language:bnp",
"language:boa",
"language:boj",
"language:bon",
"language:box",
"language:bpr",
"language:bps",
"language:bqc",
"language:bqp",
"language:bre",
"language:bsj",
"language:bsn",
"language:bsp",
"language:bss",
"language:buk",
"language:bus",
"language:bvd",
"language:bvr",
"language:bxh",
"language:byr",
"language:byx",
"language:bzd",
"language:bzh",
"language:bzj",
"language:caa",
"language:cab",
"language:cac",
"language:caf",
"language:cak",
"language:cao",
"language:cap",
"language:car",
"language:cav",
"language:cax",
"language:cbc",
"language:cbi",
"language:cbk",
"language:cbr",
"language:cbs",
"language:cbt",
"language:cbu",
"language:cbv",
"language:cco",
"language:ceb",
"language:cek",
"language:ces",
"language:cgc",
"language:cha",
"language:chd",
"language:chf",
"language:chk",
"language:chq",
"language:chz",
"language:cjo",
"language:cjv",
"language:ckb",
"language:cle",
"language:clu",
"language:cme",
"language:cmn",
"language:cni",
"language:cnl",
"language:cnt",
"language:cof",
"language:con",
"language:cop",
"language:cot",
"language:cpa",
"language:cpb",
"language:cpc",
"language:cpu",
"language:cpy",
"language:crn",
"language:crx",
"language:cso",
"language:csy",
"language:cta",
"language:cth",
"language:ctp",
"language:ctu",
"language:cub",
"language:cuc",
"language:cui",
"language:cuk",
"language:cut",
"language:cux",
"language:cwe",
"language:cya",
"language:daa",
"language:dad",
"language:dah",
"language:dan",
"language:ded",
"language:deu",
"language:dgc",
"language:dgr",
"language:dgz",
"language:dhg",
"language:dif",
"language:dik",
"language:dji",
"language:djk",
"language:djr",
"language:dob",
"language:dop",
"language:dov",
"language:dwr",
"language:dww",
"language:dwy",
"language:ebk",
"language:eko",
"language:emi",
"language:emp",
"language:eng",
"language:enq",
"language:epo",
"language:eri",
"language:ese",
"language:esk",
"language:etr",
"language:ewe",
"language:faa",
"language:fai",
"language:far",
"language:ffm",
"language:for",
"language:fra",
"language:fue",
"language:fuf",
"language:fuh",
"language:gah",
"language:gai",
"language:gam",
"language:gaw",
"language:gdn",
"language:gdr",
"language:geb",
"language:gfk",
"language:ghs",
"language:glk",
"language:gmv",
"language:gng",
"language:gnn",
"language:gnw",
"language:gof",
"language:grc",
"language:gub",
"language:guh",
"language:gui",
"language:guj",
"language:gul",
"language:gum",
"language:gun",
"language:guo",
"language:gup",
"language:gux",
"language:gvc",
"language:gvf",
"language:gvn",
"language:gvs",
"language:gwi",
"language:gym",
"language:gyr",
"language:hat",
"language:hau",
"language:haw",
"language:hbo",
"language:hch",
"language:heb",
"language:heg",
"language:hin",
"language:hix",
"language:hla",
"language:hlt",
"language:hmo",
"language:hns",
"language:hop",
"language:hot",
"language:hrv",
"language:hto",
"language:hub",
"language:hui",
"language:hun",
"language:hus",
"language:huu",
"language:huv",
"language:hvn",
"language:ian",
"language:ign",
"language:ikk",
"language:ikw",
"language:ilo",
"language:imo",
"language:inb",
"language:ind",
"language:ino",
"language:iou",
"language:ipi",
"language:isn",
"language:ita",
"language:iws",
"language:ixl",
"language:jac",
"language:jae",
"language:jao",
"language:jic",
"language:jid",
"language:jiv",
"language:jni",
"language:jpn",
"language:jvn",
"language:kan",
"language:kaq",
"language:kbc",
"language:kbh",
"language:kbm",
"language:kbq",
"language:kdc",
"language:kde",
"language:kdl",
"language:kek",
"language:ken",
"language:kew",
"language:kgf",
"language:kgk",
"language:kgp",
"language:khs",
"language:khz",
"language:kik",
"language:kiw",
"language:kiz",
"language:kje",
"language:kjn",
"language:kjs",
"language:kkc",
"language:kkl",
"language:klt",
"language:klv",
"language:kmg",
"language:kmh",
"language:kmk",
"language:kmo",
"language:kms",
"language:kmu",
"language:kne",
"language:knf",
"language:knj",
"language:knv",
"language:kos",
"language:kpf",
"language:kpg",
"language:kpj",
"language:kpr",
"language:kpw",
"language:kpx",
"language:kqa",
"language:kqc",
"language:kqf",
"language:kql",
"language:kqw",
"language:ksd",
"language:ksj",
"language:ksr",
"language:ktm",
"language:kto",
"language:kud",
"language:kue",
"language:kup",
"language:kvg",
"language:kvn",
"language:kwd",
"language:kwf",
"language:kwi",
"language:kwj",
"language:kyc",
"language:kyf",
"language:kyg",
"language:kyq",
"language:kyz",
"language:kze",
"language:lac",
"language:lat",
"language:lbb",
"language:lbk",
"language:lcm",
"language:leu",
"language:lex",
"language:lgl",
"language:lid",
"language:lif",
"language:lin",
"language:lit",
"language:llg",
"language:lug",
"language:luo",
"language:lww",
"language:maa",
"language:maj",
"language:mal",
"language:mam",
"language:maq",
"language:mar",
"language:mau",
"language:mav",
"language:maz",
"language:mbb",
"language:mbc",
"language:mbh",
"language:mbj",
"language:mbl",
"language:mbs",
"language:mbt",
"language:mca",
"language:mcb",
"language:mcd",
"language:mcf",
"language:mco",
"language:mcp",
"language:mcq",
"language:mcr",
"language:mdy",
"language:med",
"language:mee",
"language:mek",
"language:meq",
"language:met",
"language:meu",
"language:mgc",
"language:mgh",
"language:mgw",
"language:mhl",
"language:mib",
"language:mic",
"language:mie",
"language:mig",
"language:mih",
"language:mil",
"language:mio",
"language:mir",
"language:mit",
"language:miz",
"language:mjc",
"language:mkj",
"language:mkl",
"language:mkn",
"language:mks",
"language:mle",
"language:mlh",
"language:mlp",
"language:mmo",
"language:mmx",
"language:mna",
"language:mop",
"language:mox",
"language:mph",
"language:mpj",
"language:mpm",
"language:mpp",
"language:mps",
"language:mpt",
"language:mpx",
"language:mqb",
"language:mqj",
"language:msb",
"language:msc",
"language:msk",
"language:msm",
"language:msy",
"language:mti",
"language:mto",
"language:mux",
"language:muy",
"language:mva",
"language:mvn",
"language:mwc",
"language:mwe",
"language:mwf",
"language:mwp",
"language:mxb",
"language:mxp",
"language:mxq",
"language:mxt",
"language:mya",
"language:myk",
"language:myu",
"language:myw",
"language:myy",
"language:mzz",
"language:nab",
"language:naf",
"language:nak",
"language:nas",
"language:nay",
"language:nbq",
"language:nca",
"language:nch",
"language:ncj",
"language:ncl",
"language:ncu",
"language:ndg",
"language:ndj",
"language:nfa",
"language:ngp",
"language:ngu",
"language:nhe",
"language:nhg",
"language:nhi",
"language:nho",
"language:nhr",
"language:nhu",
"language:nhw",
"language:nhy",
"language:nif",
"language:nii",
"language:nin",
"language:nko",
"language:nld",
"language:nlg",
"language:nmw",
"language:nna",
"language:nnq",
"language:noa",
"language:nop",
"language:not",
"language:nou",
"language:npi",
"language:npl",
"language:nsn",
"language:nss",
"language:ntj",
"language:ntp",
"language:ntu",
"language:nuy",
"language:nvm",
"language:nwi",
"language:nya",
"language:nys",
"language:nyu",
"language:obo",
"language:okv",
"language:omw",
"language:ong",
"language:ons",
"language:ood",
"language:opm",
"language:ory",
"language:ote",
"language:otm",
"language:otn",
"language:otq",
"language:ots",
"language:pab",
"language:pad",
"language:pah",
"language:pan",
"language:pao",
"language:pes",
"language:pib",
"language:pio",
"language:pir",
"language:piu",
"language:pjt",
"language:pls",
"language:plu",
"language:pma",
"language:poe",
"language:poh",
"language:poi",
"language:pol",
"language:pon",
"language:por",
"language:poy",
"language:ppo",
"language:prf",
"language:pri",
"language:ptp",
"language:ptu",
"language:pwg",
"language:qub",
"language:quc",
"language:quf",
"language:quh",
"language:qul",
"language:qup",
"language:qvc",
"language:qve",
"language:qvh",
"language:qvm",
"language:qvn",
"language:qvs",
"language:qvw",
"language:qvz",
"language:qwh",
"language:qxh",
"language:qxn",
"language:qxo",
"language:rai",
"language:reg",
"language:rgu",
"language:rkb",
"language:rmc",
"language:rmy",
"language:ron",
"language:roo",
"language:rop",
"language:row",
"language:rro",
"language:ruf",
"language:rug",
"language:rus",
"language:rwo",
"language:sab",
"language:san",
"language:sbe",
"language:sbk",
"language:sbs",
"language:seh",
"language:sey",
"language:sgb",
"language:sgz",
"language:shj",
"language:shp",
"language:sim",
"language:sja",
"language:sll",
"language:smk",
"language:snc",
"language:snn",
"language:snp",
"language:snx",
"language:sny",
"language:som",
"language:soq",
"language:soy",
"language:spa",
"language:spl",
"language:spm",
"language:spp",
"language:sps",
"language:spy",
"language:sri",
"language:srm",
"language:srn",
"language:srp",
"language:srq",
"language:ssd",
"language:ssg",
"language:ssx",
"language:stp",
"language:sua",
"language:sue",
"language:sus",
"language:suz",
"language:swe",
"language:swh",
"language:swp",
"language:sxb",
"language:tac",
"language:taj",
"language:tam",
"language:tav",
"language:taw",
"language:tbc",
"language:tbf",
"language:tbg",
"language:tbl",
"language:tbo",
"language:tbz",
"language:tca",
"language:tcs",
"language:tcz",
"language:tdt",
"language:tee",
"language:tel",
"language:ter",
"language:tet",
"language:tew",
"language:tfr",
"language:tgk",
"language:tgl",
"language:tgo",
"language:tgp",
"language:tha",
"language:thd",
"language:tif",
"language:tim",
"language:tiw",
"language:tiy",
"language:tke",
"language:tku",
"language:tlf",
"language:tmd",
"language:tna",
"language:tnc",
"language:tnk",
"language:tnn",
"language:tnp",
"language:toc",
"language:tod",
"language:tof",
"language:toj",
"language:ton",
"language:too",
"language:top",
"language:tos",
"language:tpa",
"language:tpi",
"language:tpt",
"language:tpz",
"language:trc",
"language:tsw",
"language:ttc",
"language:tte",
"language:tuc",
"language:tue",
"language:tuf",
"language:tuo",
"language:tur",
"language:tvk",
"language:twi",
"language:txq",
"language:txu",
"language:tzj",
"language:tzo",
"language:ubr",
"language:ubu",
"language:udu",
"language:uig",
"language:ukr",
"language:uli",
"language:ulk",
"language:upv",
"language:ura",
"language:urb",
"language:urd",
"language:uri",
"language:urt",
"language:urw",
"language:usa",
"language:usp",
"language:uvh",
"language:uvl",
"language:vid",
"language:vie",
"language:viv",
"language:vmy",
"language:waj",
"language:wal",
"language:wap",
"language:wat",
"language:wbi",
"language:wbp",
"language:wed",
"language:wer",
"language:wim",
"language:wiu",
"language:wiv",
"language:wmt",
"language:wmw",
"language:wnc",
"language:wnu",
"language:wol",
"language:wos",
"language:wrk",
"language:wro",
"language:wrs",
"language:wsk",
"language:wuv",
"language:xav",
"language:xbi",
"language:xed",
"language:xla",
"language:xnn",
"language:xon",
"language:xsi",
"language:xtd",
"language:xtm",
"language:yaa",
"language:yad",
"language:yal",
"language:yap",
"language:yaq",
"language:yby",
"language:ycn",
"language:yka",
"language:yle",
"language:yml",
"language:yon",
"language:yor",
"language:yrb",
"language:yre",
"language:yss",
"language:yuj",
"language:yut",
"language:yuw",
"language:yva",
"language:zaa",
"language:zab",
"language:zac",
"language:zad",
"language:zai",
"language:zaj",
"language:zam",
"language:zao",
"language:zap",
"language:zar",
"language:zas",
"language:zat",
"language:zav",
"language:zaw",
"language:zca",
"language:zga",
"language:zia",
"language:ziw",
"language:zlm",
"language:zos",
"language:zpc",
"language:zpl",
"language:zpm",
"language:zpo",
"language:zpq",
"language:zpu",
"language:zpv",
"language:zpz",
"language:zsr",
"language:ztq",
"language:zty",
"language:zyp",
"language:be",
"language:br",
"language:cs",
"language:ch",
"language:zh",
"language:de",
"language:en",
"language:eo",
"language:fr",
"language:ht",
"language:he",
"language:hr",
"language:id",
"language:it",
"language:ja",
"language:la",
"language:nl",
"language:ru",
"language:sa",
"language:so",
"language:es",
"language:sr",
"language:sv",
"language:to",
"language:uk",
"language:vi",
"license:cc-by-4.0",
"license:other",
"region:us"
] | 2022-04-07T02:04:02+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["aai", "aak", "aau", "aaz", "abt", "abx", "aby", "acf", "acr", "acu", "adz", "aer", "aey", "agd", "agg", "agm", "agn", "agr", "agt", "agu", "aia", "aii", "aka", "ake", "alp", "alq", "als", "aly", "ame", "amf", "amk", "amm", "amn", "amo", "amp", "amr", "amu", "amx", "anh", "anv", "aoi", "aoj", "aom", "aon", "apb", "ape", "apn", "apr", "apu", "apw", "apz", "arb", "are", "arl", "arn", "arp", "asm", "aso", "ata", "atb", "atd", "atg", "att", "auc", "aui", "auy", "avt", "awb", "awk", "awx", "azb", "azg", "azz", "bao", "bba", "bbb", "bbr", "bch", "bco", "bdd", "bea", "bef", "bel", "ben", "beo", "beu", "bgs", "bgt", "bhg", "bhl", "big", "bjk", "bjp", "bjr", "bjv", "bjz", "bkd", "bki", "bkq", "bkx", "bla", "blw", "blz", "bmh", "bmk", "bmr", "bmu", "bnp", "boa", "boj", "bon", "box", "bpr", "bps", "bqc", "bqp", "bre", "bsj", "bsn", "bsp", "bss", "buk", "bus", "bvd", "bvr", "bxh", "byr", "byx", "bzd", "bzh", "bzj", "caa", "cab", "cac", "caf", "cak", "cao", "cap", "car", "cav", "cax", "cbc", "cbi", "cbk", "cbr", "cbs", "cbt", "cbu", "cbv", "cco", "ceb", "cek", "ces", "cgc", "cha", "chd", "chf", "chk", "chq", "chz", "cjo", "cjv", "ckb", "cle", "clu", "cme", "cmn", "cni", "cnl", "cnt", "cof", "con", "cop", "cot", "cpa", "cpb", "cpc", "cpu", "cpy", "crn", "crx", "cso", "csy", "cta", "cth", "ctp", "ctu", "cub", "cuc", "cui", "cuk", "cut", "cux", "cwe", "cya", "daa", "dad", "dah", "dan", "ded", "deu", "dgc", "dgr", "dgz", "dhg", "dif", "dik", "dji", "djk", "djr", "dob", "dop", "dov", "dwr", "dww", "dwy", "ebk", "eko", "emi", "emp", "eng", "enq", "epo", "eri", "ese", "esk", "etr", "ewe", "faa", "fai", "far", "ffm", "for", "fra", "fue", "fuf", "fuh", "gah", "gai", "gam", "gaw", "gdn", "gdr", "geb", "gfk", "ghs", "glk", "gmv", "gng", "gnn", "gnw", "gof", "grc", "gub", "guh", "gui", "guj", "gul", "gum", "gun", "guo", "gup", "gux", "gvc", "gvf", "gvn", "gvs", "gwi", "gym", "gyr", "hat", "hau", "haw", "hbo", "hch", "heb", "heg", "hin", "hix", "hla", "hlt", "hmo", "hns", "hop", "hot", "hrv", "hto", "hub", "hui", "hun", "hus", "huu", "huv", "hvn", "ian", "ign", "ikk", "ikw", "ilo", "imo", "inb", "ind", "ino", "iou", "ipi", "isn", "ita", "iws", "ixl", "jac", "jae", "jao", "jic", "jid", "jiv", "jni", "jpn", "jvn", "kan", "kaq", "kbc", "kbh", "kbm", "kbq", "kdc", "kde", "kdl", "kek", "ken", "kew", "kgf", "kgk", "kgp", "khs", "khz", "kik", "kiw", "kiz", "kje", "kjn", "kjs", "kkc", "kkl", "klt", "klv", "kmg", "kmh", "kmk", "kmo", "kms", "kmu", "kne", "knf", "knj", "knv", "kos", "kpf", "kpg", "kpj", "kpr", "kpw", "kpx", "kqa", "kqc", "kqf", "kql", "kqw", "ksd", "ksj", "ksr", "ktm", "kto", "kud", "kue", "kup", "kvg", "kvn", "kwd", "kwf", "kwi", "kwj", "kyc", "kyf", "kyg", "kyq", "kyz", "kze", "lac", "lat", "lbb", "lbk", "lcm", "leu", "lex", "lgl", "lid", "lif", "lin", "lit", "llg", "lug", "luo", "lww", "maa", "maj", "mal", "mam", "maq", "mar", "mau", "mav", "maz", "mbb", "mbc", "mbh", "mbj", "mbl", "mbs", "mbt", "mca", "mcb", "mcd", "mcf", "mco", "mcp", "mcq", "mcr", "mdy", "med", "mee", "mek", "meq", "met", "meu", "mgc", "mgh", "mgw", "mhl", "mib", "mic", "mie", "mig", "mih", "mil", "mio", "mir", "mit", "miz", "mjc", "mkj", "mkl", "mkn", "mks", "mle", "mlh", "mlp", "mmo", "mmx", "mna", "mop", "mox", "mph", "mpj", "mpm", "mpp", "mps", "mpt", "mpx", "mqb", "mqj", "msb", "msc", "msk", "msm", "msy", "mti", "mto", "mux", "muy", "mva", "mvn", "mwc", "mwe", "mwf", "mwp", "mxb", "mxp", "mxq", "mxt", 
"mya", "myk", "myu", "myw", "myy", "mzz", "nab", "naf", "nak", "nas", "nay", "nbq", "nca", "nch", "ncj", "ncl", "ncu", "ndg", "ndj", "nfa", "ngp", "ngu", "nhe", "nhg", "nhi", "nho", "nhr", "nhu", "nhw", "nhy", "nif", "nii", "nin", "nko", "nld", "nlg", "nmw", "nna", "nnq", "noa", "nop", "not", "nou", "npi", "npl", "nsn", "nss", "ntj", "ntp", "ntu", "nuy", "nvm", "nwi", "nya", "nys", "nyu", "obo", "okv", "omw", "ong", "ons", "ood", "opm", "ory", "ote", "otm", "otn", "otq", "ots", "pab", "pad", "pah", "pan", "pao", "pes", "pib", "pio", "pir", "piu", "pjt", "pls", "plu", "pma", "poe", "poh", "poi", "pol", "pon", "por", "poy", "ppo", "prf", "pri", "ptp", "ptu", "pwg", "qub", "quc", "quf", "quh", "qul", "qup", "qvc", "qve", "qvh", "qvm", "qvn", "qvs", "qvw", "qvz", "qwh", "qxh", "qxn", "qxo", "rai", "reg", "rgu", "rkb", "rmc", "rmy", "ron", "roo", "rop", "row", "rro", "ruf", "rug", "rus", "rwo", "sab", "san", "sbe", "sbk", "sbs", "seh", "sey", "sgb", "sgz", "shj", "shp", "sim", "sja", "sll", "smk", "snc", "snn", "snp", "snx", "sny", "som", "soq", "soy", "spa", "spl", "spm", "spp", "sps", "spy", "sri", "srm", "srn", "srp", "srq", "ssd", "ssg", "ssx", "stp", "sua", "sue", "sus", "suz", "swe", "swh", "swp", "sxb", "tac", "taj", "tam", "tav", "taw", "tbc", "tbf", "tbg", "tbl", "tbo", "tbz", "tca", "tcs", "tcz", "tdt", "tee", "tel", "ter", "tet", "tew", "tfr", "tgk", "tgl", "tgo", "tgp", "tha", "thd", "tif", "tim", "tiw", "tiy", "tke", "tku", "tlf", "tmd", "tna", "tnc", "tnk", "tnn", "tnp", "toc", "tod", "tof", "toj", "ton", "too", "top", "tos", "tpa", "tpi", "tpt", "tpz", "trc", "tsw", "ttc", "tte", "tuc", "tue", "tuf", "tuo", "tur", "tvk", "twi", "txq", "txu", "tzj", "tzo", "ubr", "ubu", "udu", "uig", "ukr", "uli", "ulk", "upv", "ura", "urb", "urd", "uri", "urt", "urw", "usa", "usp", "uvh", "uvl", "vid", "vie", "viv", "vmy", "waj", "wal", "wap", "wat", "wbi", "wbp", "wed", "wer", "wim", "wiu", "wiv", "wmt", "wmw", "wnc", "wnu", "wol", "wos", "wrk", "wro", "wrs", "wsk", "wuv", "xav", "xbi", "xed", "xla", "xnn", "xon", "xsi", "xtd", "xtm", "yaa", "yad", "yal", "yap", "yaq", "yby", "ycn", "yka", "yle", "yml", "yon", "yor", "yrb", "yre", "yss", "yuj", "yut", "yuw", "yva", "zaa", "zab", "zac", "zad", "zai", "zaj", "zam", "zao", "zap", "zar", "zas", "zat", "zav", "zaw", "zca", "zga", "zia", "ziw", "zlm", "zos", "zpc", "zpl", "zpm", "zpo", "zpq", "zpu", "zpv", "zpz", "zsr", "ztq", "zty", "zyp", "be", "br", "cs", "ch", "zh", "de", "en", "eo", "fr", "ht", "he", "hr", "id", "it", "ja", "la", "nl", "ru", "sa", "so", "es", "sr", "sv", "to", "uk", "vi"], "license": ["cc-by-4.0", "other"], "multilinguality": ["translation", "multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "biblenlp-corpus"} | 2023-07-21T10:56:30+00:00 | [] | [
"aai",
"aak",
"aau",
"aaz",
"abt",
"abx",
"aby",
"acf",
"acr",
"acu",
"adz",
"aer",
"aey",
"agd",
"agg",
"agm",
"agn",
"agr",
"agt",
"agu",
"aia",
"aii",
"aka",
"ake",
"alp",
"alq",
"als",
"aly",
"ame",
"amf",
"amk",
"amm",
"amn",
"amo",
"amp",
"amr",
"amu",
"amx",
"anh",
"anv",
"aoi",
"aoj",
"aom",
"aon",
"apb",
"ape",
"apn",
"apr",
"apu",
"apw",
"apz",
"arb",
"are",
"arl",
"arn",
"arp",
"asm",
"aso",
"ata",
"atb",
"atd",
"atg",
"att",
"auc",
"aui",
"auy",
"avt",
"awb",
"awk",
"awx",
"azb",
"azg",
"azz",
"bao",
"bba",
"bbb",
"bbr",
"bch",
"bco",
"bdd",
"bea",
"bef",
"bel",
"ben",
"beo",
"beu",
"bgs",
"bgt",
"bhg",
"bhl",
"big",
"bjk",
"bjp",
"bjr",
"bjv",
"bjz",
"bkd",
"bki",
"bkq",
"bkx",
"bla",
"blw",
"blz",
"bmh",
"bmk",
"bmr",
"bmu",
"bnp",
"boa",
"boj",
"bon",
"box",
"bpr",
"bps",
"bqc",
"bqp",
"bre",
"bsj",
"bsn",
"bsp",
"bss",
"buk",
"bus",
"bvd",
"bvr",
"bxh",
"byr",
"byx",
"bzd",
"bzh",
"bzj",
"caa",
"cab",
"cac",
"caf",
"cak",
"cao",
"cap",
"car",
"cav",
"cax",
"cbc",
"cbi",
"cbk",
"cbr",
"cbs",
"cbt",
"cbu",
"cbv",
"cco",
"ceb",
"cek",
"ces",
"cgc",
"cha",
"chd",
"chf",
"chk",
"chq",
"chz",
"cjo",
"cjv",
"ckb",
"cle",
"clu",
"cme",
"cmn",
"cni",
"cnl",
"cnt",
"cof",
"con",
"cop",
"cot",
"cpa",
"cpb",
"cpc",
"cpu",
"cpy",
"crn",
"crx",
"cso",
"csy",
"cta",
"cth",
"ctp",
"ctu",
"cub",
"cuc",
"cui",
"cuk",
"cut",
"cux",
"cwe",
"cya",
"daa",
"dad",
"dah",
"dan",
"ded",
"deu",
"dgc",
"dgr",
"dgz",
"dhg",
"dif",
"dik",
"dji",
"djk",
"djr",
"dob",
"dop",
"dov",
"dwr",
"dww",
"dwy",
"ebk",
"eko",
"emi",
"emp",
"eng",
"enq",
"epo",
"eri",
"ese",
"esk",
"etr",
"ewe",
"faa",
"fai",
"far",
"ffm",
"for",
"fra",
"fue",
"fuf",
"fuh",
"gah",
"gai",
"gam",
"gaw",
"gdn",
"gdr",
"geb",
"gfk",
"ghs",
"glk",
"gmv",
"gng",
"gnn",
"gnw",
"gof",
"grc",
"gub",
"guh",
"gui",
"guj",
"gul",
"gum",
"gun",
"guo",
"gup",
"gux",
"gvc",
"gvf",
"gvn",
"gvs",
"gwi",
"gym",
"gyr",
"hat",
"hau",
"haw",
"hbo",
"hch",
"heb",
"heg",
"hin",
"hix",
"hla",
"hlt",
"hmo",
"hns",
"hop",
"hot",
"hrv",
"hto",
"hub",
"hui",
"hun",
"hus",
"huu",
"huv",
"hvn",
"ian",
"ign",
"ikk",
"ikw",
"ilo",
"imo",
"inb",
"ind",
"ino",
"iou",
"ipi",
"isn",
"ita",
"iws",
"ixl",
"jac",
"jae",
"jao",
"jic",
"jid",
"jiv",
"jni",
"jpn",
"jvn",
"kan",
"kaq",
"kbc",
"kbh",
"kbm",
"kbq",
"kdc",
"kde",
"kdl",
"kek",
"ken",
"kew",
"kgf",
"kgk",
"kgp",
"khs",
"khz",
"kik",
"kiw",
"kiz",
"kje",
"kjn",
"kjs",
"kkc",
"kkl",
"klt",
"klv",
"kmg",
"kmh",
"kmk",
"kmo",
"kms",
"kmu",
"kne",
"knf",
"knj",
"knv",
"kos",
"kpf",
"kpg",
"kpj",
"kpr",
"kpw",
"kpx",
"kqa",
"kqc",
"kqf",
"kql",
"kqw",
"ksd",
"ksj",
"ksr",
"ktm",
"kto",
"kud",
"kue",
"kup",
"kvg",
"kvn",
"kwd",
"kwf",
"kwi",
"kwj",
"kyc",
"kyf",
"kyg",
"kyq",
"kyz",
"kze",
"lac",
"lat",
"lbb",
"lbk",
"lcm",
"leu",
"lex",
"lgl",
"lid",
"lif",
"lin",
"lit",
"llg",
"lug",
"luo",
"lww",
"maa",
"maj",
"mal",
"mam",
"maq",
"mar",
"mau",
"mav",
"maz",
"mbb",
"mbc",
"mbh",
"mbj",
"mbl",
"mbs",
"mbt",
"mca",
"mcb",
"mcd",
"mcf",
"mco",
"mcp",
"mcq",
"mcr",
"mdy",
"med",
"mee",
"mek",
"meq",
"met",
"meu",
"mgc",
"mgh",
"mgw",
"mhl",
"mib",
"mic",
"mie",
"mig",
"mih",
"mil",
"mio",
"mir",
"mit",
"miz",
"mjc",
"mkj",
"mkl",
"mkn",
"mks",
"mle",
"mlh",
"mlp",
"mmo",
"mmx",
"mna",
"mop",
"mox",
"mph",
"mpj",
"mpm",
"mpp",
"mps",
"mpt",
"mpx",
"mqb",
"mqj",
"msb",
"msc",
"msk",
"msm",
"msy",
"mti",
"mto",
"mux",
"muy",
"mva",
"mvn",
"mwc",
"mwe",
"mwf",
"mwp",
"mxb",
"mxp",
"mxq",
"mxt",
"mya",
"myk",
"myu",
"myw",
"myy",
"mzz",
"nab",
"naf",
"nak",
"nas",
"nay",
"nbq",
"nca",
"nch",
"ncj",
"ncl",
"ncu",
"ndg",
"ndj",
"nfa",
"ngp",
"ngu",
"nhe",
"nhg",
"nhi",
"nho",
"nhr",
"nhu",
"nhw",
"nhy",
"nif",
"nii",
"nin",
"nko",
"nld",
"nlg",
"nmw",
"nna",
"nnq",
"noa",
"nop",
"not",
"nou",
"npi",
"npl",
"nsn",
"nss",
"ntj",
"ntp",
"ntu",
"nuy",
"nvm",
"nwi",
"nya",
"nys",
"nyu",
"obo",
"okv",
"omw",
"ong",
"ons",
"ood",
"opm",
"ory",
"ote",
"otm",
"otn",
"otq",
"ots",
"pab",
"pad",
"pah",
"pan",
"pao",
"pes",
"pib",
"pio",
"pir",
"piu",
"pjt",
"pls",
"plu",
"pma",
"poe",
"poh",
"poi",
"pol",
"pon",
"por",
"poy",
"ppo",
"prf",
"pri",
"ptp",
"ptu",
"pwg",
"qub",
"quc",
"quf",
"quh",
"qul",
"qup",
"qvc",
"qve",
"qvh",
"qvm",
"qvn",
"qvs",
"qvw",
"qvz",
"qwh",
"qxh",
"qxn",
"qxo",
"rai",
"reg",
"rgu",
"rkb",
"rmc",
"rmy",
"ron",
"roo",
"rop",
"row",
"rro",
"ruf",
"rug",
"rus",
"rwo",
"sab",
"san",
"sbe",
"sbk",
"sbs",
"seh",
"sey",
"sgb",
"sgz",
"shj",
"shp",
"sim",
"sja",
"sll",
"smk",
"snc",
"snn",
"snp",
"snx",
"sny",
"som",
"soq",
"soy",
"spa",
"spl",
"spm",
"spp",
"sps",
"spy",
"sri",
"srm",
"srn",
"srp",
"srq",
"ssd",
"ssg",
"ssx",
"stp",
"sua",
"sue",
"sus",
"suz",
"swe",
"swh",
"swp",
"sxb",
"tac",
"taj",
"tam",
"tav",
"taw",
"tbc",
"tbf",
"tbg",
"tbl",
"tbo",
"tbz",
"tca",
"tcs",
"tcz",
"tdt",
"tee",
"tel",
"ter",
"tet",
"tew",
"tfr",
"tgk",
"tgl",
"tgo",
"tgp",
"tha",
"thd",
"tif",
"tim",
"tiw",
"tiy",
"tke",
"tku",
"tlf",
"tmd",
"tna",
"tnc",
"tnk",
"tnn",
"tnp",
"toc",
"tod",
"tof",
"toj",
"ton",
"too",
"top",
"tos",
"tpa",
"tpi",
"tpt",
"tpz",
"trc",
"tsw",
"ttc",
"tte",
"tuc",
"tue",
"tuf",
"tuo",
"tur",
"tvk",
"twi",
"txq",
"txu",
"tzj",
"tzo",
"ubr",
"ubu",
"udu",
"uig",
"ukr",
"uli",
"ulk",
"upv",
"ura",
"urb",
"urd",
"uri",
"urt",
"urw",
"usa",
"usp",
"uvh",
"uvl",
"vid",
"vie",
"viv",
"vmy",
"waj",
"wal",
"wap",
"wat",
"wbi",
"wbp",
"wed",
"wer",
"wim",
"wiu",
"wiv",
"wmt",
"wmw",
"wnc",
"wnu",
"wol",
"wos",
"wrk",
"wro",
"wrs",
"wsk",
"wuv",
"xav",
"xbi",
"xed",
"xla",
"xnn",
"xon",
"xsi",
"xtd",
"xtm",
"yaa",
"yad",
"yal",
"yap",
"yaq",
"yby",
"ycn",
"yka",
"yle",
"yml",
"yon",
"yor",
"yrb",
"yre",
"yss",
"yuj",
"yut",
"yuw",
"yva",
"zaa",
"zab",
"zac",
"zad",
"zai",
"zaj",
"zam",
"zao",
"zap",
"zar",
"zas",
"zat",
"zav",
"zaw",
"zca",
"zga",
"zia",
"ziw",
"zlm",
"zos",
"zpc",
"zpl",
"zpm",
"zpo",
"zpq",
"zpu",
"zpv",
"zpz",
"zsr",
"ztq",
"zty",
"zyp",
"be",
"br",
"cs",
"ch",
"zh",
"de",
"en",
"eo",
"fr",
"ht",
"he",
"hr",
"id",
"it",
"ja",
"la",
"nl",
"ru",
"sa",
"so",
"es",
"sr",
"sv",
"to",
"uk",
"vi"
] | TAGS
#task_categories-translation #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-translation #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Arifama-Miniafia #language-Ankave #language-Abau #language-Amarasi #language-Ambulas #language-Inabaknon #language-Aneme Wake #language-Saint Lucian Creole French #language-Achi #language-Achuar-Shiwiar #language-Adzera #language-Eastern Arrernte #language-Amele #language-Agarabi #language-Angor #language-Angaataha #language-Agutaynen #language-Aguaruna #language-Central Cagayan Agta #language-Aguacateco #language-Arosi #language-Assyrian Neo-Aramaic #language-Akan #language-Akawaio #language-Alune #language-Algonquin #language-Tosk Albanian #language-Alyawarr #language-Yanesha' #language-Hamer-Banna #language-Ambai #language-Ama (Papua New Guinea) #language-Amanab #language-Amo #language-Alamblak #language-Amarakaeri #language-Guerrero Amuzgo #language-Anmatyerre #language-Nend #language-Denya #language-Anindilyakwa #language-Mufian #language-Ömie #language-Bumbita Arapesh #language-Sa'a #language-Bukiyip #language-Apinayé #language-Arop-Lokep #language-Apurinã #language-Western Apache #language-Safeyoka #language-Standard Arabic #language-Western Arrarnta #language-Arabela #language-Mapudungun #language-Arapaho #language-Assamese #language-Dano #language-Pele-Ata #language-Zaiwa #language-Ata Manobo #language-Ivbie North-Okpela-Arhe #language-Pamplona Atta #language-Waorani #language-Anuki #language-Awiyaana #language-Au #language-Awa (Papua New Guinea) #language-Awabakal #language-Awara #language-South Azerbaijani #language-San Pedro Amuzgos Amuzgo #language-Highland Puebla Nahuatl #language-Waimaha #language-Baatonum #language-Barai #language-Girawa #language-Bariai #language-Kaluli #language-Bunama #language-Beaver #language-Benabena #language-Belarusian #language-Bengali #language-Beami #language-Blagar #language-Tagabawa #language-Bughotu #language-Binandere #language-Bimin #language-Biangai #language-Barok #language-Fanamaket #language-Binumarien #language-Bedjond #language-Baruga #language-Binukid #language-Baki #language-Bakairí #language-Baikeno #language-Siksika #language-Balangao #language-Balantak #language-Kein #language-Ghayavi #language-Muinane #language-Somba-Siawari #language-Bola #language-Bora #language-Anjam #language-Bine #language-Buamu #language-Koronadal Blaan #language-Sarangani Blaan #language-Boko (Benin) #language-Busa #language-Breton #language-Bangwinji #language-Barasana-Eduria #language-Baga Sitemu #language-Akoose #language-Bugawac #language-Bokobaru #language-Baeggu #language-Burarra #language-Buhutu #language-Baruya #language-Qaqet #language-Bribri #language-Mapos Buang #language-Belize Kriol English #language-Chortí #language-Garifuna #language-Chuj #language-Southern Carrier #language-Kaqchikel #language-Chácobo #language-Chipaya #language-Galibi Carib #language-Cavineña #language-Chiquitano #language-Carapana #language-Chachi #language-Chavacano #language-Cashibo-Cacataibo #language-Cashinahua #language-Chayahuita #language-Candoshi-Shapra #language-Cacua #language-Comaltepec Chinantec #language-Cebuano #language-Eastern Khumi Chin #language-Czech #language-Kagayanen #language-Chamorro #language-Highland Oaxaca Chontal #language-Tabasco Chontal #language-Chuukese #language-Quiotepec Chinantec #language-Ozumacín Chinantec #language-Ashéninka Pajonal #language-Chuave #language-Central Kurdish #language-Lealao Chinantec 
#language-Caluyanun #language-Cerma #language-Mandarin Chinese #language-Asháninka #language-Lalana Chinantec #language-Tepetotutla Chinantec #language-Colorado #language-Cofán #language-Coptic #language-Caquinte #language-Palantla Chinantec #language-Ucayali-Yurúa Ashéninka #language-Ajyíninka Apurucayali #language-Pichis Ashéninka #language-South Ucayali Ashéninka #language-El Nayar Cora #language-Carrier #language-Sochiapam Chinantec #language-Siyin Chin #language-Tataltepec Chatino #language-Thaiphum Chin #language-Western Highland Chatino #language-Chol #language-Cubeo #language-Usila Chinantec #language-Cuiba #language-San Blas Kuna #language-Teutila Cuicatec #language-Tepeuxila Cuicatec #language-Kwere #language-Nopala Chatino #language-Dangaléat #language-Marik #language-Gwahatike #language-Danish #language-Dedua #language-German #language-Casiguran Dumagat Agta #language-Dogrib #language-Daga #language-Dhangu-Djangu #language-Dieri #language-Southwestern Dinka #language-Djinang #language-Eastern Maroon Creole #language-Djambarrpuyngu #language-Dobu #language-Lukpa #language-Dombe #language-Dawro #language-Dawawa #language-Dhuwaya #language-Eastern Bontok #language-Koti #language-Mussau-Emira #language-Northern Emberá #language-English #language-Enga #language-Esperanto #language-Ogea #language-Ese Ejja #language-Northwest Alaska Inupiatun #language-Edolo #language-Ewe #language-Fasu #language-Faiwol #language-Fataleka #language-Maasina Fulfulde #language-Fore #language-French #language-Borgu Fulfulde #language-Pular #language-Western Niger Fulfulde #language-Alekano #language-Borei #language-Kandawo #language-Nobonob #language-Umanakaina #language-Wipi #language-Kire #language-Patpatar #language-Guhu-Samane #language-Gilaki #language-Gamo #language-Ngangam #language-Gumatj #language-Western Bolivian Guaraní #language-Gofa #language-Ancient Greek (to 1453) #language-Guajajára #language-Guahibo #language-Eastern Bolivian Guaraní #language-Gujarati #language-Sea Island Creole English #language-Guambiano #language-Mbyá Guaraní #language-Guayabero #language-Gunwinggu #language-Gourmanchéma #language-Guanano #language-Golin #language-Kuku-Yalanji #language-Gumawana #language-Gwichʼin #language-Ngäbere #language-Guarayu #language-Haitian #language-Hausa #language-Hawaiian #language-Ancient Hebrew #language-Huichol #language-Hebrew #language-Helong #language-Hindi #language-Hixkaryána #language-Halia #language-Matu Chin #language-Hiri Motu #language-Caribbean Hindustani #language-Hopi #language-Hote #language-Croatian #language-Minica Huitoto #language-Huambisa #language-Huli #language-Hungarian #language-Huastec #language-Murui Huitoto #language-San Mateo Del Mar Huave #language-Sabu #language-Iatmul #language-Ignaciano #language-Ika #language-Ikwere #language-Iloko #language-Imbongu #language-Inga #language-Indonesian #language-Inoke-Yate #language-Tuma-Irumu #language-Ipili #language-Isanzu #language-Italian #language-Sepik Iwam #language-Ixil #language-Popti' #language-Yabem #language-Yanyuwa #language-Tol #language-Bu (Kaduna State) #language-Shuar #language-Janji #language-Japanese #language-Caribbean Javanese #language-Kannada #language-Capanahua #language-Kadiwéu #language-Camsá #language-Iwal #language-Kamano #language-Kutu #language-Makonde #language-Tsikimba #language-Kekchí #language-Kenyang #language-West Kewa #language-Kube #language-Kaiwá #language-Kaingang #language-Kasua #language-Keapara #language-Kikuyu #language-Northeast Kiwai #language-Kisi #language-Kisar 
#language-Kunjen #language-East Kewa #language-Odoodee #language-Kosarek Yale #language-Nukna #language-Maskelynes #language-Kâte #language-Kalam #language-Limos Kalinga #language-Kwoma #language-Kamasau #language-Kanite #language-Kankanaey #language-Mankanya #language-Western Kanjobal #language-Tabo #language-Kosraean #language-Komba #language-Kapingamarangi #language-Karajá #language-Korafe-Yegha #language-Kobon #language-Mountain Koiali #language-Mum #language-Doromu-Koki #language-Kakabai #language-Kyenele #language-Kandas #language-Kuanua #language-Uare #language-Borong #language-Kurti #language-Kuot #language-'Auhelawa #language-Kuman (Papua New Guinea) #language-Kunimaipa #language-Kuni-Boazi #language-Border Kuna #language-Kwaio #language-Kwara'ae #language-Awa-Cuaiquer #language-Kwanga #language-Kyaka #language-Kouya #language-Keyagana #language-Kenga #language-Kayabí #language-Kosena #language-Lacandon #language-Latin #language-Label #language-Central Bontok #language-Tungag #language-Kara (Papua New Guinea) #language-Luang #language-Wala #language-Nyindrou #language-Limbu #language-Lingala #language-Lithuanian #language-Lole #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lewo #language-San Jerónimo Tecóatl Mazatec #language-Jalapa De Díaz Mazatec #language-Malayalam #language-Mam #language-Chiquihuitlán Mazatec #language-Marathi #language-Huautla Mazatec #language-Sateré-Mawé #language-Central Mazahua #language-Western Bukidnon Manobo #language-Macushi #language-Mangseng #language-Nadëb #language-Maxakalí #language-Sarangani Manobo #language-Matigsalug Manobo #language-Maca #language-Machiguenga #language-Sharanahua #language-Matsés #language-Coatlán Mixe #language-Makaa #language-Ese #language-Menya #language-Male (Ethiopia) #language-Melpa #language-Mengen #language-Mekeo #language-Merey #language-Mato #language-Motu #language-Morokodo #language-Makhuwa-Meetto #language-Matumbi #language-Mauwake #language-Atatláhuca Mixtec #language-Mi'kmaq #language-Ocotepec Mixtec #language-San Miguel El Grande Mixtec #language-Chayuco Mixtec #language-Peñoles Mixtec #language-Pinotepa Nacional Mixtec #language-Isthmus Mixe #language-Southern Puebla Mixtec #language-Coatzospan Mixtec #language-San Juan Colorado Mixtec #language-Mokilese #language-Mokole #language-Kupang Malay #language-Silacayoapan Mixtec #language-Manambu #language-Mape #language-Bargam #language-Mangga Buang #language-Madak #language-Mbula #language-Mopán Maya #language-Molima #language-Maung #language-Martu Wangka #language-Yosondúa Mixtec #language-Migabac #language-Dadibi #language-Mian #language-Misima-Panaeati #language-Mbuko #language-Mamasa #language-Masbatenyo #language-Sankaran Maninka #language-Mansaka #language-Agusan Manobo #language-Aruamu #language-Maiwa (Papua New Guinea) #language-Totontepec Mixe #language-Bo-Ung #language-Muyang #language-Manam #language-Minaveha #language-Are #language-Mwera (Chimwera) #language-Murrinh-Patha #language-Kala Lagaw Ya #language-Tezoatlán Mixtec #language-Tlahuitoltepec Mixe #language-Juquila Mixe #language-Jamiltepec Mixtec #language-Burmese #language-Mamara Senoufo #language-Mundurukú #language-Muyuw #language-Macuna #language-Maiadomu #language-Southern Nambikuára #language-Nabak #language-Nakanai #language-Naasioi #language-Ngarrindjeri #language-Nggem #language-Iyo #language-Central Huasteca Nahuatl #language-Northern Puebla Nahuatl #language-Michoacán Nahuatl #language-Chumburung #language-Ndengereko #language-Ndamba #language-Dhao #language-Ngulu 
#language-Guerrero Nahuatl #language-Eastern Huasteca Nahuatl #language-Tetelcingo Nahuatl #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Takuu #language-Naro #language-Noone #language-Western Huasteca Nahuatl #language-Northern Oaxaca Nahuatl #language-Nek #language-Nii #language-Ninzo #language-Nkonya #language-Dutch #language-Gela #language-Nimoa #language-Nyangumarta #language-Ngindo #language-Woun Meu #language-Numanggang #language-Nomatsiguenga #language-Ewage-Notu #language-Nepali (individual language) #language-Southeastern Puebla Nahuatl #language-Nehan #language-Nali #language-Ngaanyatjarra #language-Northern Tepehuan #language-Natügu #language-Nunggubuyu #language-Namiae #language-Southwest Tanna #language-Nyanja #language-Nyungar #language-Nyungwe #language-Obo Manobo #language-Orokaiva #language-South Tairora #language-Olo #language-Ono #language-Tohono O'odham #language-Oksapmin #language-Odia #language-Mezquital Otomi #language-Eastern Highland Otomi #language-Tenango Otomi #language-Querétaro Otomi #language-Estado de México Otomi #language-Parecís #language-Paumarí #language-Tenharim #language-Panjabi #language-Northern Paiute #language-Iranian Persian #language-Yine #language-Piapoco #language-Piratapuyo #language-Pintupi-Luritja #language-Pitjantjatjara #language-San Marcos Tlacoyalco Popoloca #language-Palikúr #language-Paama #language-San Juan Atzingo Popoloca #language-Poqomchi' #language-Highland Popoluca #language-Polish #language-Pohnpeian #language-Portuguese #language-Pogolo #language-Folopa #language-Paranan #language-Paicî #language-Patep #language-Bambam #language-Gapapaiwa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-South Bolivian Quechua #language-North Bolivian Quechua #language-Southern Pastaza Quechua #language-Cajamarca Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-North Junín Quechua #language-San Martín Quechua #language-Huaylla Wanca Quechua #language-Northern Pastaza Quichua #language-Huaylas Ancash Quechua #language-Panao Huánuco Quechua #language-Northern Conchucos Ancash Quechua #language-Southern Conchucos Ancash Quechua #language-Ramoaaina #language-Kara (Tanzania) #language-Ringgou #language-Rikbaktsa #language-Carpathian Romani #language-Vlax Romani #language-Romanian #language-Rotokas #language-Kriol #language-Dela-Oenale #language-Waima #language-Luguru #language-Roviana #language-Russian #language-Rawa #language-Buglere #language-Sanskrit #language-Saliba #language-Safwa #language-Subiya #language-Sena #language-Secoya #language-Mag-antsi Ayta #language-Sursurunga #language-Shatt #language-Shipibo-Conibo #language-Mende (Papua New Guinea) #language-Epena #language-Salt-Yui #language-Bolinao #language-Sinaugoro #language-Siona #language-Siane #language-Sam #language-Saniyo-Hiyewe #language-Somali #language-Kanasi #language-Miyobe #language-Spanish #language-Selepet #language-Akukem #language-Supyire Senoufo #language-Saposa #language-Sabaot #language-Siriano #language-Saramaccan #language-Sranan Tongo #language-Serbian #language-Sirionó #language-Siroi #language-Seimat #language-Samberigi #language-Southeastern Tepehuan #language-Sulka #language-Suena #language-Susu #language-Sunwar #language-Swedish #language-Swahili (individual language) #language-Suau #language-Suba #language-Lowland Tarahumara #language-Eastern Tamang #language-Tamil #language-Tatuyo #language-Tai 
#language-Takia #language-Mandara #language-North Tairora #language-Tboli #language-Tawala #language-Ditammari #language-Ticuna #language-Torres Strait Creole #language-Thado Chin #language-Tetun Dili #language-Huehuetla Tepehua #language-Telugu #language-Tereno #language-Tetum #language-Tewa (USA) #language-Teribe #language-Tajik #language-Tagalog #language-Sudest #language-Tangoa #language-Thai #language-Kuuk Thaayorre #language-Tifal #language-Timbe #language-Tiwi #language-Tiruray #language-Takwane #language-Upper Necaxa Totonac #language-Telefol #language-Haruai #language-Tacana #language-Tanimuca-Retuarã #language-Kwamera #language-North Tanna #language-Whitesands #language-Coyutla Totonac #language-Toma #language-Gizrra #language-Tojolabal #language-Tonga (Tonga Islands) #language-Xicotepec De Juárez Totonac #language-Papantla Totonac #language-Highland Totonac #language-Taupota #language-Tok Pisin #language-Tlachichilco Tepehua #language-Tinputz #language-Copala Triqui #language-Tsishingini #language-Tektiteko #language-Bwanabwana #language-Mutu #language-Tuyuca #language-Central Tunebo #language-Tucano #language-Turkish #language-Southeast Ambrym #language-Twi #language-Tii #language-Kayapó #language-Tz'utujil #language-Tzotzil #language-Ubir #language-Umbu-Ungu #language-Uduk #language-Uighur #language-Ukrainian #language-Ulithian #language-Meriam Mir #language-Uripiv-Wala-Rano-Atchin #language-Urarina #language-Urubú-Kaapor #language-Urdu #language-Urim #language-Urat #language-Sop #language-Usarufa #language-Uspanteco #language-Uri #language-Lote #language-Vidunda #language-Vietnamese #language-Iduna #language-Ayautla Mazatec #language-Waffa #language-Wolaytta #language-Wapishana #language-Kaninuwa #language-Vwanji #language-Warlpiri #language-Wedau #language-Weri #language-Wik-Mungkan #language-Wiru #language-Vitu #language-Walmajarri #language-Mwani #language-Wantoat #language-Usan #language-Wolof #language-Hanga Hundi #language-Garrwa #language-Worrorra #language-Waris #language-Waskia #language-Wuvulu-Aua #language-Xavánte #language-Kombio #language-Hdi #language-Kamula #language-Northern Kankanay #language-Konkomba #language-Sio #language-Diuxi-Tilantongo Mixtec #language-Magdalena Peñasco Mixtec #language-Yaminahua #language-Yagua #language-Yalunka #language-Yapese #language-Yaqui #language-Yaweyuha #language-Yucuna #language-Yakan #language-Yele #language-Iamalele #language-Yongkom #language-Yoruba #language-Yareba #language-Yaouré #language-Yessan-Mayo #language-Karkar-Yuri #language-Yopno #language-Yau (Morobe Province) #language-Yawa #language-Sierra de Juárez Zapotec #language-Western Tlacolula Valley Zapotec #language-Ocotlán Zapotec #language-Cajonos Zapotec #language-Isthmus Zapotec #language-Zaramo #language-Miahuatlán Zapotec #language-Ozolotepec Zapotec #language-Zapotec #language-Rincón Zapotec #language-Santo Domingo Albarradas Zapotec #language-Tabaa Zapotec #language-Yatzachi Zapotec #language-Mitla Zapotec #language-Coatecas Altas Zapotec #language-Kinga #language-Zia #language-Zigula #language-Malay (individual language) #language-Francisco León Zoque #language-Choapan Zapotec #language-Lachixío Zapotec #language-Mixtepec Zapotec #language-Amatlán Zapotec #language-Zoogocho Zapotec #language-Yalálag Zapotec #language-Chichicapan Zapotec #language-Texmelucan Zapotec #language-Southern Rincon Zapotec #language-Quioquitani-Quierí Zapotec #language-Yatee Zapotec #language-Zyphe Chin #language-Belarusian #language-Breton #language-Czech #language-Chamorro 
#language-Chinese #language-German #language-English #language-Esperanto #language-French #language-Haitian #language-Hebrew #language-Croatian #language-Indonesian #language-Italian #language-Japanese #language-Latin #language-Dutch #language-Russian #language-Sanskrit #language-Somali #language-Spanish #language-Serbian #language-Swedish #language-Tonga (Tonga Islands) #language-Ukrainian #language-Vietnamese #license-cc-by-4.0 #license-other #region-us
|
# Dataset Card for BibleNLP Corpus
### Dataset Summary
Partial and complete Bible translations in 833 languages, aligned by verse.
### Languages
aai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc, aui, auy, avt, awb, awk, awx, azb, azg, azz, bao, bba, bbb, bbr, bch, bco, bdd, bea, bef, bel, ben, beo, beu, bgs, bgt, bhg, bhl, big, bjk, bjp, bjr, bjv, bjz, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bmu, bnp, boa, boj, bon, box, bpr, bps, bqc, bqp, bre, bsj, bsn, bsp, bss, buk, bus, bvd, bvr, bxh, byr, byx, bzd, bzh, bzj, caa, cab, cac, caf, cak, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ceb, cek, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, ckb, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, cpy, crn, crx, cso, csy, cta, cth, ctp, ctu, cub, cuc, cui, cuk, cut, cux, cwe, cya, daa, dad, dah, dan, ded, deu, dgc, dgr, dgz, dhg, dif, dik, dji, djk, djr, dob, dop, dov, dwr, dww, dwy, ebk, eko, emi, emp, eng, enq, epo, eri, ese, esk, etr, ewe, faa, fai, far, ffm, for, fra, fue, fuf, fuh, gah, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, guj, gul, gum, gun, guo, gup, gux, gvc, gvf, gvn, gvs, gwi, gym, gyr, hat, hau, haw, hbo, hch, heb, heg, hin, hix, hla, hlt, hmo, hns, hop, hot, hrv, hto, hub, hui, hun, hus, huu, huv, hvn, ian, ign, ikk, ikw, ilo, imo, inb, ind, ino, iou, ipi, isn, ita, iws, ixl, jac, jae, jao, jic, jid, jiv, jni, jpn, jvn, kan, kaq, kbc, kbh, kbm, kbq, kdc, kde, kdl, kek, ken, kew, kgf, kgk, kgp, khs, khz, kik, kiw, kiz, kje, kjn, kjs, kkc, kkl, klt, klv, kmg, kmh, kmk, kmo, kms, kmu, kne, knf, knj, knv, kos, kpf, kpg, kpj, kpr, kpw, kpx, kqa, kqc, kqf, kql, kqw, ksd, ksj, ksr, ktm, kto, kud, kue, kup, kvg, kvn, kwd, kwf, kwi, kwj, kyc, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, lbk, lcm, leu, lex, lgl, lid, lif, lin, lit, llg, lug, luo, lww, maa, maj, mal, mam, maq, mar, mau, mav, maz, mbb, mbc, mbh, mbj, mbl, mbs, mbt, mca, mcb, mcd, mcf, mco, mcp, mcq, mcr, mdy, med, mee, mek, meq, met, meu, mgc, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkj, mkl, mkn, mks, mle, mlh, mlp, mmo, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpt, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, mto, mux, muy, mva, mvn, mwc, mwe, mwf, mwp, mxb, mxp, mxq, mxt, mya, myk, myu, myw, myy, mzz, nab, naf, nak, nas, nay, nbq, nca, nch, ncj, ncl, ncu, ndg, ndj, nfa, ngp, ngu, nhe, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nii, nin, nko, nld, nlg, nmw, nna, nnq, noa, nop, not, nou, npi, npl, nsn, nss, ntj, ntp, ntu, nuy, nvm, nwi, nya, nys, nyu, obo, okv, omw, ong, ons, ood, opm, ory, ote, otm, otn, otq, ots, pab, pad, pah, pan, pao, pes, pib, pio, pir, piu, pjt, pls, plu, pma, poe, poh, poi, pol, pon, por, poy, ppo, prf, pri, ptp, ptu, pwg, qub, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, reg, rgu, rkb, rmc, rmy, ron, roo, rop, row, rro, ruf, rug, rus, rwo, sab, san, sbe, sbk, sbs, seh, sey, sgb, sgz, shj, shp, sim, sja, sll, smk, snc, snn, snp, snx, sny, som, soq, soy, spa, spl, spm, spp, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, taj, tam, tav, taw, tbc, tbf, tbg, tbl, tbo, tbz, tca, tcs, tcz, tdt, tee, tel, ter, tet, tew, tfr, tgk, tgl, tgo, tgp, tha, thd, tif, tim, tiw, tiy, tke, tku, tlf, tmd, tna, tnc, tnk, tnn, tnp, toc, tod, tof, toj, ton, too, top, 
tos, tpa, tpi, tpt, tpz, trc, tsw, ttc, tte, tuc, tue, tuf, tuo, tur, tvk, twi, txq, txu, tzj, tzo, ubr, ubu, udu, uig, ukr, uli, ulk, upv, ura, urb, urd, uri, urt, urw, usa, usp, uvh, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbi, wbp, wed, wer, wim, wiu, wiv, wmt, wmw, wnc, wnu, wol, wos, wrk, wro, wrs, wsk, wuv, xav, xbi, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yle, yml, yon, yor, yrb, yre, yss, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zap, zar, zas, zat, zav, zaw, zca, zga, zia, ziw, zlm, zos, zpc, zpl, zpm, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure
### Data Fields
translation
- languages - an N length list of the languages of the translations, sorted alphabetically
- translation - an N length list with the translations each corresponding to the language specified in the above field
files
- lang - an N length list of the languages of the files, in order of input
- file - an N length list of the filenames from the corpus on github, each corresponding with the lang above
ref - the verse(s) contained in the record, as a list, with each verse represented as: `<three letter book code> <chapter number>:<verse number>`
licenses - an N length list of licenses, corresponding to the list of files above
copyrights - information on copyright holders, corresponding to the list of files above
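For illustration, a single record might look roughly like the sketch below. The nesting and key names follow the field descriptions above; the verse reference is real, but every bracketed value is an invented placeholder, not data from the corpus:
```
example = {
    "ref": ["GEN 1:1"],
    "translation": {
        "languages": ["eng", "fra"],
        "translation": ["<eng verse text>", "<fra verse text>"],
    },
    "files": {
        "lang": ["eng", "fra"],
        "file": ["<eng file name from the corpus repo>", "<fra file name from the corpus repo>"],
    },
    "licenses": ["<license of the eng file>", "<license of the fra file>"],
    "copyrights": ["<copyright holder of the eng file>", "<copyright holder of the fra file>"],
}
```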
### Usage
The dataset loading script requires installation of tqdm, ijson, and numpy.
Specify the languages to be paired as a list of ISO 639-3 language codes, such as `languages = ['eng', 'fra']`.
By default, the script returns individual verse pairs as well as verses covering a full range. If only the individual verse pairs are desired, use `pair='single'`. If only the maximum-range pairing is desired, use `pair='range'` (for example, if one text uses a verse range covering GEN 1:1-3, all texts would return only the full-length pairing).
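A minimal loading sketch is shown below. The `languages` and `pair` options are the ones described above; the repository identifier is an assumption, so substitute the actual name of this corpus on the Hub:
```
from datasets import load_dataset

# Hypothetical repo id -- replace with the actual corpus repository name.
corpus = load_dataset(
    "bible-nlp/biblenlp-corpus",
    languages=["eng", "fra"],  # ISO 639-3 codes
    pair="single",             # return individual verse pairs only
)
print(corpus)
```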
## Sources
URL | [
"# Dataset Card for BibleNLP Corpus",
"### Dataset Summary\nPartial and complete Bible translations in 833 languages, aligned by verse.",
"### Languages\naai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc, aui, auy, avt, awb, awk, awx, azb, azg, azz, bao, bba, bbb, bbr, bch, bco, bdd, bea, bef, bel, ben, beo, beu, bgs, bgt, bhg, bhl, big, bjk, bjp, bjr, bjv, bjz, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bmu, bnp, boa, boj, bon, box, bpr, bps, bqc, bqp, bre, bsj, bsn, bsp, bss, buk, bus, bvd, bvr, bxh, byr, byx, bzd, bzh, bzj, caa, cab, cac, caf, cak, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ceb, cek, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, ckb, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, cpy, crn, crx, cso, csy, cta, cth, ctp, ctu, cub, cuc, cui, cuk, cut, cux, cwe, cya, daa, dad, dah, dan, ded, deu, dgc, dgr, dgz, dhg, dif, dik, dji, djk, djr, dob, dop, dov, dwr, dww, dwy, ebk, eko, emi, emp, eng, enq, epo, eri, ese, esk, etr, ewe, faa, fai, far, ffm, for, fra, fue, fuf, fuh, gah, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, guj, gul, gum, gun, guo, gup, gux, gvc, gvf, gvn, gvs, gwi, gym, gyr, hat, hau, haw, hbo, hch, heb, heg, hin, hix, hla, hlt, hmo, hns, hop, hot, hrv, hto, hub, hui, hun, hus, huu, huv, hvn, ian, ign, ikk, ikw, ilo, imo, inb, ind, ino, iou, ipi, isn, ita, iws, ixl, jac, jae, jao, jic, jid, jiv, jni, jpn, jvn, kan, kaq, kbc, kbh, kbm, kbq, kdc, kde, kdl, kek, ken, kew, kgf, kgk, kgp, khs, khz, kik, kiw, kiz, kje, kjn, kjs, kkc, kkl, klt, klv, kmg, kmh, kmk, kmo, kms, kmu, kne, knf, knj, knv, kos, kpf, kpg, kpj, kpr, kpw, kpx, kqa, kqc, kqf, kql, kqw, ksd, ksj, ksr, ktm, kto, kud, kue, kup, kvg, kvn, kwd, kwf, kwi, kwj, kyc, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, lbk, lcm, leu, lex, lgl, lid, lif, lin, lit, llg, lug, luo, lww, maa, maj, mal, mam, maq, mar, mau, mav, maz, mbb, mbc, mbh, mbj, mbl, mbs, mbt, mca, mcb, mcd, mcf, mco, mcp, mcq, mcr, mdy, med, mee, mek, meq, met, meu, mgc, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkj, mkl, mkn, mks, mle, mlh, mlp, mmo, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpt, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, mto, mux, muy, mva, mvn, mwc, mwe, mwf, mwp, mxb, mxp, mxq, mxt, mya, myk, myu, myw, myy, mzz, nab, naf, nak, nas, nay, nbq, nca, nch, ncj, ncl, ncu, ndg, ndj, nfa, ngp, ngu, nhe, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nii, nin, nko, nld, nlg, nmw, nna, nnq, noa, nop, not, nou, npi, npl, nsn, nss, ntj, ntp, ntu, nuy, nvm, nwi, nya, nys, nyu, obo, okv, omw, ong, ons, ood, opm, ory, ote, otm, otn, otq, ots, pab, pad, pah, pan, pao, pes, pib, pio, pir, piu, pjt, pls, plu, pma, poe, poh, poi, pol, pon, por, poy, ppo, prf, pri, ptp, ptu, pwg, qub, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, reg, rgu, rkb, rmc, rmy, ron, roo, rop, row, rro, ruf, rug, rus, rwo, sab, san, sbe, sbk, sbs, seh, sey, sgb, sgz, shj, shp, sim, sja, sll, smk, snc, snn, snp, snx, sny, som, soq, soy, spa, spl, spm, spp, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, taj, tam, tav, taw, tbc, tbf, tbg, tbl, tbo, tbz, tca, tcs, tcz, tdt, tee, tel, ter, tet, tew, tfr, tgk, tgl, tgo, tgp, tha, thd, tif, tim, tiw, tiy, tke, tku, tlf, tmd, tna, tnc, tnk, tnn, tnp, toc, tod, tof, 
toj, ton, too, top, tos, tpa, tpi, tpt, tpz, trc, tsw, ttc, tte, tuc, tue, tuf, tuo, tur, tvk, twi, txq, txu, tzj, tzo, ubr, ubu, udu, uig, ukr, uli, ulk, upv, ura, urb, urd, uri, urt, urw, usa, usp, uvh, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbi, wbp, wed, wer, wim, wiu, wiv, wmt, wmw, wnc, wnu, wol, wos, wrk, wro, wrs, wsk, wuv, xav, xbi, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yle, yml, yon, yor, yrb, yre, yss, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zap, zar, zas, zat, zav, zaw, zca, zga, zia, ziw, zlm, zos, zpc, zpl, zpm, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp",
"## Dataset Structure",
"### Data Fields\n\ntranslation \n- languages - an N length list of the languages of the translations, sorted alphabetically\n- translation - an N length list with the translations each corresponding to the language specified in the above field\nfiles\n- lang - an N length list of the languages of the files, in order of input\n- file - an N length list of the filenames from the corpus on github, each corresponding with the lang above\nref - the verse(s) contained in the record, as a list, with each represented with: ''<a three letter book code> <chapter number>:<verse number>'' \n\nlicenses - an N length list of licenses, corresponding to the list of files above\n\ncopyrights - information on copyright holders, corresponding to the list of files above",
"### Usage\n\nThe dataset loading script requires installation of tqdm, ijson, and numpy\n\nSpecify the languages to be paired with a list and ISO 693-3 language codes, such as ''languages = ['eng', 'fra']''.\nBy default, the script will return individual verse pairs, as well as verses covering a full range. If only the individual verses is desired, use ''pair='single'''. If only the maximum range pairing is desired use ''pair='range''' (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full length pairing).",
"## Sources\nURL"
] | [
"TAGS\n#task_categories-translation #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-translation #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Arifama-Miniafia #language-Ankave #language-Abau #language-Amarasi #language-Ambulas #language-Inabaknon #language-Aneme Wake #language-Saint Lucian Creole French #language-Achi #language-Achuar-Shiwiar #language-Adzera #language-Eastern Arrernte #language-Amele #language-Agarabi #language-Angor #language-Angaataha #language-Agutaynen #language-Aguaruna #language-Central Cagayan Agta #language-Aguacateco #language-Arosi #language-Assyrian Neo-Aramaic #language-Akan #language-Akawaio #language-Alune #language-Algonquin #language-Tosk Albanian #language-Alyawarr #language-Yanesha' #language-Hamer-Banna #language-Ambai #language-Ama (Papua New Guinea) #language-Amanab #language-Amo #language-Alamblak #language-Amarakaeri #language-Guerrero Amuzgo #language-Anmatyerre #language-Nend #language-Denya #language-Anindilyakwa #language-Mufian #language-Ömie #language-Bumbita Arapesh #language-Sa'a #language-Bukiyip #language-Apinayé #language-Arop-Lokep #language-Apurinã #language-Western Apache #language-Safeyoka #language-Standard Arabic #language-Western Arrarnta #language-Arabela #language-Mapudungun #language-Arapaho #language-Assamese #language-Dano #language-Pele-Ata #language-Zaiwa #language-Ata Manobo #language-Ivbie North-Okpela-Arhe #language-Pamplona Atta #language-Waorani #language-Anuki #language-Awiyaana #language-Au #language-Awa (Papua New Guinea) #language-Awabakal #language-Awara #language-South Azerbaijani #language-San Pedro Amuzgos Amuzgo #language-Highland Puebla Nahuatl #language-Waimaha #language-Baatonum #language-Barai #language-Girawa #language-Bariai #language-Kaluli #language-Bunama #language-Beaver #language-Benabena #language-Belarusian #language-Bengali #language-Beami #language-Blagar #language-Tagabawa #language-Bughotu #language-Binandere #language-Bimin #language-Biangai #language-Barok #language-Fanamaket #language-Binumarien #language-Bedjond #language-Baruga #language-Binukid #language-Baki #language-Bakairí #language-Baikeno #language-Siksika #language-Balangao #language-Balantak #language-Kein #language-Ghayavi #language-Muinane #language-Somba-Siawari #language-Bola #language-Bora #language-Anjam #language-Bine #language-Buamu #language-Koronadal Blaan #language-Sarangani Blaan #language-Boko (Benin) #language-Busa #language-Breton #language-Bangwinji #language-Barasana-Eduria #language-Baga Sitemu #language-Akoose #language-Bugawac #language-Bokobaru #language-Baeggu #language-Burarra #language-Buhutu #language-Baruya #language-Qaqet #language-Bribri #language-Mapos Buang #language-Belize Kriol English #language-Chortí #language-Garifuna #language-Chuj #language-Southern Carrier #language-Kaqchikel #language-Chácobo #language-Chipaya #language-Galibi Carib #language-Cavineña #language-Chiquitano #language-Carapana #language-Chachi #language-Chavacano #language-Cashibo-Cacataibo #language-Cashinahua #language-Chayahuita #language-Candoshi-Shapra #language-Cacua #language-Comaltepec Chinantec #language-Cebuano #language-Eastern Khumi Chin #language-Czech #language-Kagayanen #language-Chamorro #language-Highland Oaxaca Chontal #language-Tabasco Chontal #language-Chuukese #language-Quiotepec Chinantec #language-Ozumacín Chinantec #language-Ashéninka Pajonal #language-Chuave #language-Central Kurdish #language-Lealao Chinantec 
#language-Caluyanun #language-Cerma #language-Mandarin Chinese #language-Asháninka #language-Lalana Chinantec #language-Tepetotutla Chinantec #language-Colorado #language-Cofán #language-Coptic #language-Caquinte #language-Palantla Chinantec #language-Ucayali-Yurúa Ashéninka #language-Ajyíninka Apurucayali #language-Pichis Ashéninka #language-South Ucayali Ashéninka #language-El Nayar Cora #language-Carrier #language-Sochiapam Chinantec #language-Siyin Chin #language-Tataltepec Chatino #language-Thaiphum Chin #language-Western Highland Chatino #language-Chol #language-Cubeo #language-Usila Chinantec #language-Cuiba #language-San Blas Kuna #language-Teutila Cuicatec #language-Tepeuxila Cuicatec #language-Kwere #language-Nopala Chatino #language-Dangaléat #language-Marik #language-Gwahatike #language-Danish #language-Dedua #language-German #language-Casiguran Dumagat Agta #language-Dogrib #language-Daga #language-Dhangu-Djangu #language-Dieri #language-Southwestern Dinka #language-Djinang #language-Eastern Maroon Creole #language-Djambarrpuyngu #language-Dobu #language-Lukpa #language-Dombe #language-Dawro #language-Dawawa #language-Dhuwaya #language-Eastern Bontok #language-Koti #language-Mussau-Emira #language-Northern Emberá #language-English #language-Enga #language-Esperanto #language-Ogea #language-Ese Ejja #language-Northwest Alaska Inupiatun #language-Edolo #language-Ewe #language-Fasu #language-Faiwol #language-Fataleka #language-Maasina Fulfulde #language-Fore #language-French #language-Borgu Fulfulde #language-Pular #language-Western Niger Fulfulde #language-Alekano #language-Borei #language-Kandawo #language-Nobonob #language-Umanakaina #language-Wipi #language-Kire #language-Patpatar #language-Guhu-Samane #language-Gilaki #language-Gamo #language-Ngangam #language-Gumatj #language-Western Bolivian Guaraní #language-Gofa #language-Ancient Greek (to 1453) #language-Guajajára #language-Guahibo #language-Eastern Bolivian Guaraní #language-Gujarati #language-Sea Island Creole English #language-Guambiano #language-Mbyá Guaraní #language-Guayabero #language-Gunwinggu #language-Gourmanchéma #language-Guanano #language-Golin #language-Kuku-Yalanji #language-Gumawana #language-Gwichʼin #language-Ngäbere #language-Guarayu #language-Haitian #language-Hausa #language-Hawaiian #language-Ancient Hebrew #language-Huichol #language-Hebrew #language-Helong #language-Hindi #language-Hixkaryána #language-Halia #language-Matu Chin #language-Hiri Motu #language-Caribbean Hindustani #language-Hopi #language-Hote #language-Croatian #language-Minica Huitoto #language-Huambisa #language-Huli #language-Hungarian #language-Huastec #language-Murui Huitoto #language-San Mateo Del Mar Huave #language-Sabu #language-Iatmul #language-Ignaciano #language-Ika #language-Ikwere #language-Iloko #language-Imbongu #language-Inga #language-Indonesian #language-Inoke-Yate #language-Tuma-Irumu #language-Ipili #language-Isanzu #language-Italian #language-Sepik Iwam #language-Ixil #language-Popti' #language-Yabem #language-Yanyuwa #language-Tol #language-Bu (Kaduna State) #language-Shuar #language-Janji #language-Japanese #language-Caribbean Javanese #language-Kannada #language-Capanahua #language-Kadiwéu #language-Camsá #language-Iwal #language-Kamano #language-Kutu #language-Makonde #language-Tsikimba #language-Kekchí #language-Kenyang #language-West Kewa #language-Kube #language-Kaiwá #language-Kaingang #language-Kasua #language-Keapara #language-Kikuyu #language-Northeast Kiwai #language-Kisi #language-Kisar 
#language-Kunjen #language-East Kewa #language-Odoodee #language-Kosarek Yale #language-Nukna #language-Maskelynes #language-Kâte #language-Kalam #language-Limos Kalinga #language-Kwoma #language-Kamasau #language-Kanite #language-Kankanaey #language-Mankanya #language-Western Kanjobal #language-Tabo #language-Kosraean #language-Komba #language-Kapingamarangi #language-Karajá #language-Korafe-Yegha #language-Kobon #language-Mountain Koiali #language-Mum #language-Doromu-Koki #language-Kakabai #language-Kyenele #language-Kandas #language-Kuanua #language-Uare #language-Borong #language-Kurti #language-Kuot #language-'Auhelawa #language-Kuman (Papua New Guinea) #language-Kunimaipa #language-Kuni-Boazi #language-Border Kuna #language-Kwaio #language-Kwara'ae #language-Awa-Cuaiquer #language-Kwanga #language-Kyaka #language-Kouya #language-Keyagana #language-Kenga #language-Kayabí #language-Kosena #language-Lacandon #language-Latin #language-Label #language-Central Bontok #language-Tungag #language-Kara (Papua New Guinea) #language-Luang #language-Wala #language-Nyindrou #language-Limbu #language-Lingala #language-Lithuanian #language-Lole #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lewo #language-San Jerónimo Tecóatl Mazatec #language-Jalapa De Díaz Mazatec #language-Malayalam #language-Mam #language-Chiquihuitlán Mazatec #language-Marathi #language-Huautla Mazatec #language-Sateré-Mawé #language-Central Mazahua #language-Western Bukidnon Manobo #language-Macushi #language-Mangseng #language-Nadëb #language-Maxakalí #language-Sarangani Manobo #language-Matigsalug Manobo #language-Maca #language-Machiguenga #language-Sharanahua #language-Matsés #language-Coatlán Mixe #language-Makaa #language-Ese #language-Menya #language-Male (Ethiopia) #language-Melpa #language-Mengen #language-Mekeo #language-Merey #language-Mato #language-Motu #language-Morokodo #language-Makhuwa-Meetto #language-Matumbi #language-Mauwake #language-Atatláhuca Mixtec #language-Mi'kmaq #language-Ocotepec Mixtec #language-San Miguel El Grande Mixtec #language-Chayuco Mixtec #language-Peñoles Mixtec #language-Pinotepa Nacional Mixtec #language-Isthmus Mixe #language-Southern Puebla Mixtec #language-Coatzospan Mixtec #language-San Juan Colorado Mixtec #language-Mokilese #language-Mokole #language-Kupang Malay #language-Silacayoapan Mixtec #language-Manambu #language-Mape #language-Bargam #language-Mangga Buang #language-Madak #language-Mbula #language-Mopán Maya #language-Molima #language-Maung #language-Martu Wangka #language-Yosondúa Mixtec #language-Migabac #language-Dadibi #language-Mian #language-Misima-Panaeati #language-Mbuko #language-Mamasa #language-Masbatenyo #language-Sankaran Maninka #language-Mansaka #language-Agusan Manobo #language-Aruamu #language-Maiwa (Papua New Guinea) #language-Totontepec Mixe #language-Bo-Ung #language-Muyang #language-Manam #language-Minaveha #language-Are #language-Mwera (Chimwera) #language-Murrinh-Patha #language-Kala Lagaw Ya #language-Tezoatlán Mixtec #language-Tlahuitoltepec Mixe #language-Juquila Mixe #language-Jamiltepec Mixtec #language-Burmese #language-Mamara Senoufo #language-Mundurukú #language-Muyuw #language-Macuna #language-Maiadomu #language-Southern Nambikuára #language-Nabak #language-Nakanai #language-Naasioi #language-Ngarrindjeri #language-Nggem #language-Iyo #language-Central Huasteca Nahuatl #language-Northern Puebla Nahuatl #language-Michoacán Nahuatl #language-Chumburung #language-Ndengereko #language-Ndamba #language-Dhao #language-Ngulu 
#language-Guerrero Nahuatl #language-Eastern Huasteca Nahuatl #language-Tetelcingo Nahuatl #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Takuu #language-Naro #language-Noone #language-Western Huasteca Nahuatl #language-Northern Oaxaca Nahuatl #language-Nek #language-Nii #language-Ninzo #language-Nkonya #language-Dutch #language-Gela #language-Nimoa #language-Nyangumarta #language-Ngindo #language-Woun Meu #language-Numanggang #language-Nomatsiguenga #language-Ewage-Notu #language-Nepali (individual language) #language-Southeastern Puebla Nahuatl #language-Nehan #language-Nali #language-Ngaanyatjarra #language-Northern Tepehuan #language-Natügu #language-Nunggubuyu #language-Namiae #language-Southwest Tanna #language-Nyanja #language-Nyungar #language-Nyungwe #language-Obo Manobo #language-Orokaiva #language-South Tairora #language-Olo #language-Ono #language-Tohono O'odham #language-Oksapmin #language-Odia #language-Mezquital Otomi #language-Eastern Highland Otomi #language-Tenango Otomi #language-Querétaro Otomi #language-Estado de México Otomi #language-Parecís #language-Paumarí #language-Tenharim #language-Panjabi #language-Northern Paiute #language-Iranian Persian #language-Yine #language-Piapoco #language-Piratapuyo #language-Pintupi-Luritja #language-Pitjantjatjara #language-San Marcos Tlacoyalco Popoloca #language-Palikúr #language-Paama #language-San Juan Atzingo Popoloca #language-Poqomchi' #language-Highland Popoluca #language-Polish #language-Pohnpeian #language-Portuguese #language-Pogolo #language-Folopa #language-Paranan #language-Paicî #language-Patep #language-Bambam #language-Gapapaiwa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-South Bolivian Quechua #language-North Bolivian Quechua #language-Southern Pastaza Quechua #language-Cajamarca Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-North Junín Quechua #language-San Martín Quechua #language-Huaylla Wanca Quechua #language-Northern Pastaza Quichua #language-Huaylas Ancash Quechua #language-Panao Huánuco Quechua #language-Northern Conchucos Ancash Quechua #language-Southern Conchucos Ancash Quechua #language-Ramoaaina #language-Kara (Tanzania) #language-Ringgou #language-Rikbaktsa #language-Carpathian Romani #language-Vlax Romani #language-Romanian #language-Rotokas #language-Kriol #language-Dela-Oenale #language-Waima #language-Luguru #language-Roviana #language-Russian #language-Rawa #language-Buglere #language-Sanskrit #language-Saliba #language-Safwa #language-Subiya #language-Sena #language-Secoya #language-Mag-antsi Ayta #language-Sursurunga #language-Shatt #language-Shipibo-Conibo #language-Mende (Papua New Guinea) #language-Epena #language-Salt-Yui #language-Bolinao #language-Sinaugoro #language-Siona #language-Siane #language-Sam #language-Saniyo-Hiyewe #language-Somali #language-Kanasi #language-Miyobe #language-Spanish #language-Selepet #language-Akukem #language-Supyire Senoufo #language-Saposa #language-Sabaot #language-Siriano #language-Saramaccan #language-Sranan Tongo #language-Serbian #language-Sirionó #language-Siroi #language-Seimat #language-Samberigi #language-Southeastern Tepehuan #language-Sulka #language-Suena #language-Susu #language-Sunwar #language-Swedish #language-Swahili (individual language) #language-Suau #language-Suba #language-Lowland Tarahumara #language-Eastern Tamang #language-Tamil #language-Tatuyo #language-Tai 
#language-Takia #language-Mandara #language-North Tairora #language-Tboli #language-Tawala #language-Ditammari #language-Ticuna #language-Torres Strait Creole #language-Thado Chin #language-Tetun Dili #language-Huehuetla Tepehua #language-Telugu #language-Tereno #language-Tetum #language-Tewa (USA) #language-Teribe #language-Tajik #language-Tagalog #language-Sudest #language-Tangoa #language-Thai #language-Kuuk Thaayorre #language-Tifal #language-Timbe #language-Tiwi #language-Tiruray #language-Takwane #language-Upper Necaxa Totonac #language-Telefol #language-Haruai #language-Tacana #language-Tanimuca-Retuarã #language-Kwamera #language-North Tanna #language-Whitesands #language-Coyutla Totonac #language-Toma #language-Gizrra #language-Tojolabal #language-Tonga (Tonga Islands) #language-Xicotepec De Juárez Totonac #language-Papantla Totonac #language-Highland Totonac #language-Taupota #language-Tok Pisin #language-Tlachichilco Tepehua #language-Tinputz #language-Copala Triqui #language-Tsishingini #language-Tektiteko #language-Bwanabwana #language-Mutu #language-Tuyuca #language-Central Tunebo #language-Tucano #language-Turkish #language-Southeast Ambrym #language-Twi #language-Tii #language-Kayapó #language-Tz'utujil #language-Tzotzil #language-Ubir #language-Umbu-Ungu #language-Uduk #language-Uighur #language-Ukrainian #language-Ulithian #language-Meriam Mir #language-Uripiv-Wala-Rano-Atchin #language-Urarina #language-Urubú-Kaapor #language-Urdu #language-Urim #language-Urat #language-Sop #language-Usarufa #language-Uspanteco #language-Uri #language-Lote #language-Vidunda #language-Vietnamese #language-Iduna #language-Ayautla Mazatec #language-Waffa #language-Wolaytta #language-Wapishana #language-Kaninuwa #language-Vwanji #language-Warlpiri #language-Wedau #language-Weri #language-Wik-Mungkan #language-Wiru #language-Vitu #language-Walmajarri #language-Mwani #language-Wantoat #language-Usan #language-Wolof #language-Hanga Hundi #language-Garrwa #language-Worrorra #language-Waris #language-Waskia #language-Wuvulu-Aua #language-Xavánte #language-Kombio #language-Hdi #language-Kamula #language-Northern Kankanay #language-Konkomba #language-Sio #language-Diuxi-Tilantongo Mixtec #language-Magdalena Peñasco Mixtec #language-Yaminahua #language-Yagua #language-Yalunka #language-Yapese #language-Yaqui #language-Yaweyuha #language-Yucuna #language-Yakan #language-Yele #language-Iamalele #language-Yongkom #language-Yoruba #language-Yareba #language-Yaouré #language-Yessan-Mayo #language-Karkar-Yuri #language-Yopno #language-Yau (Morobe Province) #language-Yawa #language-Sierra de Juárez Zapotec #language-Western Tlacolula Valley Zapotec #language-Ocotlán Zapotec #language-Cajonos Zapotec #language-Isthmus Zapotec #language-Zaramo #language-Miahuatlán Zapotec #language-Ozolotepec Zapotec #language-Zapotec #language-Rincón Zapotec #language-Santo Domingo Albarradas Zapotec #language-Tabaa Zapotec #language-Yatzachi Zapotec #language-Mitla Zapotec #language-Coatecas Altas Zapotec #language-Kinga #language-Zia #language-Zigula #language-Malay (individual language) #language-Francisco León Zoque #language-Choapan Zapotec #language-Lachixío Zapotec #language-Mixtepec Zapotec #language-Amatlán Zapotec #language-Zoogocho Zapotec #language-Yalálag Zapotec #language-Chichicapan Zapotec #language-Texmelucan Zapotec #language-Southern Rincon Zapotec #language-Quioquitani-Quierí Zapotec #language-Yatee Zapotec #language-Zyphe Chin #language-Belarusian #language-Breton #language-Czech #language-Chamorro 
#language-Chinese #language-German #language-English #language-Esperanto #language-French #language-Haitian #language-Hebrew #language-Croatian #language-Indonesian #language-Italian #language-Japanese #language-Latin #language-Dutch #language-Russian #language-Sanskrit #language-Somali #language-Spanish #language-Serbian #language-Swedish #language-Tonga (Tonga Islands) #language-Ukrainian #language-Vietnamese #license-cc-by-4.0 #license-other #region-us \n",
"# Dataset Card for BibleNLP Corpus",
"### Dataset Summary\nPartial and complete Bible translations in 833 languages, aligned by verse.",
"### Languages\naai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc, aui, auy, avt, awb, awk, awx, azb, azg, azz, bao, bba, bbb, bbr, bch, bco, bdd, bea, bef, bel, ben, beo, beu, bgs, bgt, bhg, bhl, big, bjk, bjp, bjr, bjv, bjz, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bmu, bnp, boa, boj, bon, box, bpr, bps, bqc, bqp, bre, bsj, bsn, bsp, bss, buk, bus, bvd, bvr, bxh, byr, byx, bzd, bzh, bzj, caa, cab, cac, caf, cak, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ceb, cek, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, ckb, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, cpy, crn, crx, cso, csy, cta, cth, ctp, ctu, cub, cuc, cui, cuk, cut, cux, cwe, cya, daa, dad, dah, dan, ded, deu, dgc, dgr, dgz, dhg, dif, dik, dji, djk, djr, dob, dop, dov, dwr, dww, dwy, ebk, eko, emi, emp, eng, enq, epo, eri, ese, esk, etr, ewe, faa, fai, far, ffm, for, fra, fue, fuf, fuh, gah, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, guj, gul, gum, gun, guo, gup, gux, gvc, gvf, gvn, gvs, gwi, gym, gyr, hat, hau, haw, hbo, hch, heb, heg, hin, hix, hla, hlt, hmo, hns, hop, hot, hrv, hto, hub, hui, hun, hus, huu, huv, hvn, ian, ign, ikk, ikw, ilo, imo, inb, ind, ino, iou, ipi, isn, ita, iws, ixl, jac, jae, jao, jic, jid, jiv, jni, jpn, jvn, kan, kaq, kbc, kbh, kbm, kbq, kdc, kde, kdl, kek, ken, kew, kgf, kgk, kgp, khs, khz, kik, kiw, kiz, kje, kjn, kjs, kkc, kkl, klt, klv, kmg, kmh, kmk, kmo, kms, kmu, kne, knf, knj, knv, kos, kpf, kpg, kpj, kpr, kpw, kpx, kqa, kqc, kqf, kql, kqw, ksd, ksj, ksr, ktm, kto, kud, kue, kup, kvg, kvn, kwd, kwf, kwi, kwj, kyc, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, lbk, lcm, leu, lex, lgl, lid, lif, lin, lit, llg, lug, luo, lww, maa, maj, mal, mam, maq, mar, mau, mav, maz, mbb, mbc, mbh, mbj, mbl, mbs, mbt, mca, mcb, mcd, mcf, mco, mcp, mcq, mcr, mdy, med, mee, mek, meq, met, meu, mgc, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkj, mkl, mkn, mks, mle, mlh, mlp, mmo, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpt, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, mto, mux, muy, mva, mvn, mwc, mwe, mwf, mwp, mxb, mxp, mxq, mxt, mya, myk, myu, myw, myy, mzz, nab, naf, nak, nas, nay, nbq, nca, nch, ncj, ncl, ncu, ndg, ndj, nfa, ngp, ngu, nhe, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nii, nin, nko, nld, nlg, nmw, nna, nnq, noa, nop, not, nou, npi, npl, nsn, nss, ntj, ntp, ntu, nuy, nvm, nwi, nya, nys, nyu, obo, okv, omw, ong, ons, ood, opm, ory, ote, otm, otn, otq, ots, pab, pad, pah, pan, pao, pes, pib, pio, pir, piu, pjt, pls, plu, pma, poe, poh, poi, pol, pon, por, poy, ppo, prf, pri, ptp, ptu, pwg, qub, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, reg, rgu, rkb, rmc, rmy, ron, roo, rop, row, rro, ruf, rug, rus, rwo, sab, san, sbe, sbk, sbs, seh, sey, sgb, sgz, shj, shp, sim, sja, sll, smk, snc, snn, snp, snx, sny, som, soq, soy, spa, spl, spm, spp, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, taj, tam, tav, taw, tbc, tbf, tbg, tbl, tbo, tbz, tca, tcs, tcz, tdt, tee, tel, ter, tet, tew, tfr, tgk, tgl, tgo, tgp, tha, thd, tif, tim, tiw, tiy, tke, tku, tlf, tmd, tna, tnc, tnk, tnn, tnp, toc, tod, tof, 
toj, ton, too, top, tos, tpa, tpi, tpt, tpz, trc, tsw, ttc, tte, tuc, tue, tuf, tuo, tur, tvk, twi, txq, txu, tzj, tzo, ubr, ubu, udu, uig, ukr, uli, ulk, upv, ura, urb, urd, uri, urt, urw, usa, usp, uvh, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbi, wbp, wed, wer, wim, wiu, wiv, wmt, wmw, wnc, wnu, wol, wos, wrk, wro, wrs, wsk, wuv, xav, xbi, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yle, yml, yon, yor, yrb, yre, yss, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zap, zar, zas, zat, zav, zaw, zca, zga, zia, ziw, zlm, zos, zpc, zpl, zpm, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp",
"## Dataset Structure",
"### Data Fields\n\ntranslation \n- languages - an N length list of the languages of the translations, sorted alphabetically\n- translation - an N length list with the translations each corresponding to the language specified in the above field\nfiles\n- lang - an N length list of the languages of the files, in order of input\n- file - an N length list of the filenames from the corpus on github, each corresponding with the lang above\nref - the verse(s) contained in the record, as a list, with each represented with: ''<a three letter book code> <chapter number>:<verse number>'' \n\nlicenses - an N length list of licenses, corresponding to the list of files above\n\ncopyrights - information on copyright holders, corresponding to the list of files above",
"### Usage\n\nThe dataset loading script requires installation of tqdm, ijson, and numpy\n\nSpecify the languages to be paired with a list and ISO 693-3 language codes, such as ''languages = ['eng', 'fra']''.\nBy default, the script will return individual verse pairs, as well as verses covering a full range. If only the individual verses is desired, use ''pair='single'''. If only the maximum range pairing is desired use ''pair='range''' (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full length pairing).",
"## Sources\nURL"
] |
b8bbfeb80905e6d66dc06a47ec6e37b502ea6c69 |
# NEREL dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
NEREL dataset (https://doi.org/10.48550/arXiv.2108.13112) is
a Russian dataset for named entity recognition and relation extraction.
NEREL is significantly larger than existing Russian datasets:
to date it contains 56K annotated named entities and 39K annotated relations.
Its important difference from previous datasets is annotation of nested named
entities, as well as relations within nested entities and at the discourse
level. NEREL can facilitate development of novel models that can extract
relations between nested named entities, as well as relations on both sentence
and document levels. NEREL also contains the annotation of events involving
named entities and their roles in the events.
You can see the full list of entity types in the "ent_types" subset
and the full list of relation types in the "rel_types" subset.
## Dataset Structure
There are three "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']`
you can download the list of entity types (
Dataset({features: ['type', 'link']})
) where "link" is the name of the knowledge base used for the entity linking task.
Using
`load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']`
you can download the list of relation types (
Dataset({features: ['type', 'arg1', 'arg2']})
) where "arg1" and "arg2" are the lists of entity types that can fill the first and
second arguments of that relation "type". \<ENTITY> stands for any type.
Using
`load_dataset('MalakhovIlya/NEREL', 'data')` or `load_dataset('MalakhovIlya/NEREL')`
you can download the data itself,
a DatasetDict with 3 splits: "train", "test" and "dev".
Each split contains text documents with annotated entities, relations and
links.
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
"links" are used in entity linking task (see https://en.wikipedia.org/wiki/Entity_linking)
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is the position of the first symbol of the entity in the text,
`<stop>` is the position of the last symbol in the text plus one.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
Each link is represented by a string of the following format:
`"<id>\tReference <ent_id> <link>\t<text>"`, where
`<id>` is a link id,
`<ent_id>` is an entity id,
`<link>` is a reference to knowledge base entity (example: "Wikidata:Q1879675" if link exists, else "Wikidata:NULL"),
`<text>` is the name of the entity in the knowledge base if the link exists, else an empty string.
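As a rough sketch, these annotation strings can be parsed with plain string splitting. The functions below follow the formats described above; they are not an official parser:
```
def parse_entity(line):
    # "<id>\t<type> <start> <stop>\t<text>"
    ent_id, span, text = line.split("\t", 2)
    ent_type, start, stop = span.split(" ", 2)
    return {"id": ent_id, "type": ent_type,
            "start": int(start), "stop": int(stop), "text": text}

def parse_relation(line):
    # "<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"
    rel_id, rest = line.split("\t", 1)
    rel_type, arg1, arg2 = rest.split(" ")
    return {"id": rel_id, "type": rel_type,
            "arg1": arg1.split(":", 1)[1], "arg2": arg2.split(":", 1)[1]}

def parse_link(line):
    # "<id>\tReference <ent_id> <link>\t<text>"
    link_id, middle, text = line.split("\t", 2)
    _, ent_id, kb_link = middle.split(" ", 2)
    return {"id": link_id, "entity_id": ent_id, "link": kb_link, "text": text}
```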
## Citation Information
@article{loukachevitch2021nerel,
title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events},
author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena},
journal={arXiv preprint arXiv:2108.13112},
year={2021}
}
| iluvvatar/NEREL | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-04-07T08:03:51+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "NEREL"} | 2023-03-30T12:37:20+00:00 | [] | [
"ru"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #region-us
|
# NEREL dataset
## Table of Contents
- Dataset Description
- Dataset Structure
- Citation Information
- Contacts
## Dataset Description
NEREL dataset (URL is
a Russian dataset for named entity recognition and relation extraction.
NEREL is significantly larger than existing Russian datasets:
to date it contains 56K annotated named entities and 39K annotated relations.
Its important difference from previous datasets is annotation of nested named
entities, as well as relations within nested entities and at the discourse
level. NEREL can facilitate development of novel models that can extract
relations between nested named entities, as well as relations on both sentence
and document levels. NEREL also contains the annotation of events involving
named entities and their roles in the events.
You can see full entity types list in a subset "ent_types"
and full list of relation types in a subset "rel_types".
## Dataset Structure
There are three "configs" or "subsets" of the dataset.
Using
'load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']'
you can download the list of entity types (
Dataset({features: ['type', 'link']})
) where "link" is a knowledge base name used in entity linking task.
Using
'load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']'
you can download the list of relation types (
Dataset({features: ['type', 'arg1', 'arg2']})
) where "arg1" and "arg2" are lists of entity types that can take part in such
"type" of relation. \<ENTITY> stands for any type.
Using
'load_dataset('MalakhovIlya/NEREL', 'data')' or 'load_dataset('MalakhovIlya/NEREL')'
you can download the data itself,
DatasetDict with 3 splits: "train", "test" and "dev".
Each of them contains text document with annotated entities, relations and
links.
"entities" are used in named-entity recognition task (see URL
"relations" are used in relationship extraction task (see URL
"links" are used in entity linking task (see URL
Each entity is represented by a string of the following format:
'"<id>\t<type> <start> <stop>\t<text>"', where
'<id>' is an entity id,
'<type>' is one of entity types,
'<start>' is a position of the first symbol of entity in text,
'<stop>' is the last symbol position in text +1.
Each relation is represented by a string of the following format:
'"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"', where
'<id>' is a relation id,
'<arg1_id>' and '<arg2_id>' are entity ids.
Each link is represented by a string of the following format:
'"<id>\tReference <ent_id> <link>\t<text>"', where
'<id>' is a link id,
'<ent_id>' is an entity id,
'<link>' is a reference to knowledge base entity (example: "Wikidata:Q1879675" if link exists, else "Wikidata:NULL"),
'<text>' is a name of entity in knowledge base if link exists, else empty string.
@article{loukachevitch2021nerel,
title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events},
author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena},
journal={arXiv preprint arXiv:2108.13112},
year={2021}
}
| [
"# NEREL dataset",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Citation Information\n- Contacts",
"## Dataset Description\nNEREL dataset (URL is\na Russian dataset for named entity recognition and relation extraction.\nNEREL is significantly larger than existing Russian datasets:\nto date it contains 56K annotated named entities and 39K annotated relations.\nIts important difference from previous datasets is annotation of nested named\nentities, as well as relations within nested entities and at the discourse\nlevel. NEREL can facilitate development of novel models that can extract\nrelations between nested named entities, as well as relations on both sentence\nand document levels. NEREL also contains the annotation of events involving\nnamed entities and their roles in the events.\n\nYou can see full entity types list in a subset \"ent_types\"\nand full list of relation types in a subset \"rel_types\".",
"## Dataset Structure\nThere are three \"configs\" or \"subsets\" of the dataset.\n\nUsing \n'load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']' \nyou can download list of entity types (\nDataset({features: ['type', 'link']})\n) where \"link\" is a knowledge base name used in entity linking task.\n \nUsing \n'load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']' \nyou can download list of entity types (\nDataset({features: ['type', 'arg1', 'arg2']})\n) where \"arg1\" and \"arg2\" are lists of entity types that can take part in such\n\"type\" of relation. \\<ENTITY> stands for any type.\n\nUsing \n'load_dataset('MalakhovIlya/NEREL', 'data')' or 'load_dataset('MalakhovIlya/NEREL')'\nyou can download the data itself, \nDatasetDict with 3 splits: \"train\", \"test\" and \"dev\".\nEach of them contains text document with annotated entities, relations and\nlinks.\n\n\"entities\" are used in named-entity recognition task (see URL \n\"relations\" are used in relationship extraction task (see URL \n\"links\" are used in entity linking task (see URL\n\nEach entity is represented by a string of the following format: \n'\"<id>\\t<type> <start> <stop>\\t<text>\"', where \n'<id>' is an entity id, \n'<type>' is one of entity types, \n'<start>' is a position of the first symbol of entity in text, \n'<stop>' is the last symbol position in text +1.\n\nEach relation is represented by a string of the following format: \n'\"<id>\\t<type> Arg1:<arg1_id> Arg2:<arg2_id>\"', where \n'<id>' is a relation id, \n'<arg1_id>' and '<arg2_id>' are entity ids.\n\nEach link is represented by a string of the following format: \n'\"<id>\\tReference <ent_id> <link>\\t<text>\"', where \n'<id>' is a link id, \n'<ent_id>' is an entity id, \n'<link>' is a reference to knowledge base entity (example: \"Wikidata:Q1879675\" if link exists, else \"Wikidata:NULL\"), \n'<text>' is a name of entity in knowledge base if link exists, else empty string.\n\n\n@article{loukachevitch2021nerel, \n title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events}, \n author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena}, \n journal={arXiv preprint arXiv:2108.13112}, \n year={2021} \n}"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #region-us \n",
"# NEREL dataset",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Citation Information\n- Contacts",
"## Dataset Description\nNEREL dataset (URL is\na Russian dataset for named entity recognition and relation extraction.\nNEREL is significantly larger than existing Russian datasets:\nto date it contains 56K annotated named entities and 39K annotated relations.\nIts important difference from previous datasets is annotation of nested named\nentities, as well as relations within nested entities and at the discourse\nlevel. NEREL can facilitate development of novel models that can extract\nrelations between nested named entities, as well as relations on both sentence\nand document levels. NEREL also contains the annotation of events involving\nnamed entities and their roles in the events.\n\nYou can see full entity types list in a subset \"ent_types\"\nand full list of relation types in a subset \"rel_types\".",
"## Dataset Structure\nThere are three \"configs\" or \"subsets\" of the dataset.\n\nUsing \n'load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']' \nyou can download list of entity types (\nDataset({features: ['type', 'link']})\n) where \"link\" is a knowledge base name used in entity linking task.\n \nUsing \n'load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']' \nyou can download list of entity types (\nDataset({features: ['type', 'arg1', 'arg2']})\n) where \"arg1\" and \"arg2\" are lists of entity types that can take part in such\n\"type\" of relation. \\<ENTITY> stands for any type.\n\nUsing \n'load_dataset('MalakhovIlya/NEREL', 'data')' or 'load_dataset('MalakhovIlya/NEREL')'\nyou can download the data itself, \nDatasetDict with 3 splits: \"train\", \"test\" and \"dev\".\nEach of them contains text document with annotated entities, relations and\nlinks.\n\n\"entities\" are used in named-entity recognition task (see URL \n\"relations\" are used in relationship extraction task (see URL \n\"links\" are used in entity linking task (see URL\n\nEach entity is represented by a string of the following format: \n'\"<id>\\t<type> <start> <stop>\\t<text>\"', where \n'<id>' is an entity id, \n'<type>' is one of entity types, \n'<start>' is a position of the first symbol of entity in text, \n'<stop>' is the last symbol position in text +1.\n\nEach relation is represented by a string of the following format: \n'\"<id>\\t<type> Arg1:<arg1_id> Arg2:<arg2_id>\"', where \n'<id>' is a relation id, \n'<arg1_id>' and '<arg2_id>' are entity ids.\n\nEach link is represented by a string of the following format: \n'\"<id>\\tReference <ent_id> <link>\\t<text>\"', where \n'<id>' is a link id, \n'<ent_id>' is an entity id, \n'<link>' is a reference to knowledge base entity (example: \"Wikidata:Q1879675\" if link exists, else \"Wikidata:NULL\"), \n'<text>' is a name of entity in knowledge base if link exists, else empty string.\n\n\n@article{loukachevitch2021nerel, \n title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events}, \n author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena}, \n journal={arXiv preprint arXiv:2108.13112}, \n year={2021} \n}"
] |
7fb2f514ea683c5282dfec0a9672ece8de90ac50 |
This file contains news texts (sentences) belonging to 5 different news categories (political, business, technology, sports and entertainment). The original dataset was released by Nisansa de Silva (*Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*). The original dataset has been processed and cleaned of single-word texts, English-only sentences, etc.
If you use this dataset, please cite {*Nisansa de Silva, Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*} and {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} | NLPC-UOM/Sinhala-News-Category-classification | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:si",
"license:mit",
"region:us"
] | 2022-04-07T11:21:01+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["si"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "sinhala-news-category-classification"} | 2022-10-25T09:03:58+00:00 | [] | [
"si"
] | TAGS
#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #language-Sinhala #license-mit #region-us
|
This file contains news texts (sentences) belonging to 5 different news categories (political, business, technology, sports and Entertainment). The original dataset was released by Nisansa de Silva (*Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*). The original dataset is processed and cleaned of single word texts, English only sentences etc.
If you use this dataset, please cite {*Nisansa de Silva, Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*} and {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} | [] | [
"TAGS\n#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #language-Sinhala #license-mit #region-us \n"
] |
ac4d14eeb68efbef95e247542d4432ce674faeb1 |
This dataset contains Sinhala news headlines extracted from 9 news sources (websites) (Sri Lanka Army, Dinamina, GossipLanka, Hiru, ITN, Lankapuwath, NewsLK,
Newsfirst, World Socialist Web Site-Sinhala). This is a processed version of the corpus created by *Sachintha, D., Piyarathna, L., Rajitha, C., and Ranathunga, S. (2021). Exploiting parallel corpora to improve multilingual embedding based document and sentence alignment*. Single-word sentences and invalid characters have been removed from the originally extracted corpus, which was also subsampled to handle class imbalance.
If you use this dataset please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} | NLPC-UOM/Sinhala-News-Source-classification | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:si",
"license:mit",
"region:us"
] | 2022-04-07T11:43:58+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["si"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "sinhala-news-source-classification"} | 2022-10-25T09:04:01+00:00 | [] | [
"si"
] | TAGS
#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #language-Sinhala #license-mit #region-us
|
This dataset contains Sinhala news headlines extracted from 9 news sources (websites) (Sri Lanka Army, Dinamina, GossipLanka, Hiru, ITN, Lankapuwath, NewsLK,
Newsfirst, World Socialist Web Site-Sinhala). This is a processed version of the corpus created by *Sachintha, D., Piyarathna, L., Rajitha, C., and Ranathunga, S. (2021). Exploiting parallel corpora to improve multilingual embedding based document and sentence alignment*. Single word sentences, invalid characters have been removed from the originally extracted corpus and also subsampled to handle class imbalance.
If you use this dataset please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} | [] | [
"TAGS\n#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #language-Sinhala #license-mit #region-us \n"
] |
46d3e24187694e12e7b4ae59b94c80b86ab774d8 |
# Dataset Card for KoBEST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/SKT-LSL/KoBEST_datarepo
- **Paper:**
- **Point of Contact:** https://github.com/SKT-LSL/KoBEST_datarepo/issues
### Dataset Summary
KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
### Supported Tasks and Leaderboards
Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### KB-BoolQ
An example of a data point looks as follows.
```
{'paragraph': '두아 리파(Dua Lipa, 1995년 8월 22일 ~ )는 잉글랜드의 싱어송라이터, 모델이다. BBC 사운드 오브 2016 명단에 노미닛되었다. 싱글 "Be the One"가 영국 싱글 차트 9위까지 오르는 등 성과를 보여주었다.',
'question': '두아 리파는 영국인인가?',
'label': 1}
```
#### KB-COPA
An example of a data point looks as follows.
```
{'premise': '물을 오래 끓였다.',
'question': '결과',
'alternative_1': '물의 양이 늘어났다.',
'alternative_2': '물의 양이 줄어들었다.',
'label': 1}
```
#### KB-WiC
An example of a data point looks as follows.
```
{'word': '양분',
'context_1': '토양에 [양분]이 풍부하여 나무가 잘 자란다. ',
'context_2': '태아는 모체로부터 [양분]과 산소를 공급받게 된다.',
'label': 1}
```
#### KB-HellaSwag
An example of a data point looks as follows.
```
{'context': '모자를 쓴 투수가 타자에게 온 힘을 다해 공을 던진다. 공이 타자에게 빠른 속도로 다가온다. 타자가 공을 배트로 친다. 배트에서 깡 소리가 난다. 공이 하늘 위로 날아간다.',
'ending_1': '외야수가 떨어지는 공을 글러브로 잡는다.',
'ending_2': '외야수가 공이 떨어질 위치에 자리를 잡는다.',
'ending_3': '심판이 아웃을 외친다.',
'ending_4': '외야수가 공을 따라 뛰기 시작한다.',
'label': 3}
```
#### KB-SentiNeg
An example of a data point looks as follows.
```
{'sentence': '택배사 정말 마음에 듬',
'label': 1}
```
### Data Fields
### KB-BoolQ
+ `paragraph`: a `string` feature
+ `question`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-COPA
+ `premise`: a `string` feature
+ `question`: a `string` feature
+ `alternative_1`: a `string` feature
+ `alternative_2`: a `string` feature
+ `label`: an answer candidate label, with possible values `alternative_1`(0) and `alternative_2`(1)
### KB-WiC
+ `word`: a `string` feature
+ `context_1`: a `string` feature
+ `context_2`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-HellaSwag
+ `context`: a `string` feature
+ `ending_1`: a `string` feature
+ `ending_2`: a `string` feature
+ `ending_3`: a `string` feature
+ `ending_4`: a `string` feature
+ `label`: a classification label over the four endings, with possible values `ending_1`(0), `ending_2`(1), `ending_3`(2) and `ending_4`(3)
### KB-SentiNeg
+ `sentence`: a `string` feature
+ `label`: a classification label, with possible values `Negative`(0) and `Positive`(1)
### Data Splits
#### KB-BoolQ
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-COPA
+ train: 3,076
+ dev: 1,000
+ test: 1,000
#### KB-WiC
+ train: 3,318
+ dev: 1,260
+ test: 1,260
#### KB-HellaSwag
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-SentiNeg
+ train: 3,649
+ dev: 400
+ test: 397
+ test_originated: 397 (the corresponding training data from which the test set was derived.)
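As a rough sketch (not part of the original card), each task above can presumably be loaded as its own configuration of the `skt/kobest_v1` repository; the config names below simply mirror the task names and should be treated as assumptions.

```python
from datasets import load_dataset

# Assumed config names, one per task; adjust if the Hub repository differs.
for config in ["boolq", "copa", "wic", "hellaswag", "sentineg"]:
    ds = load_dataset("skt/kobest_v1", config)
    print(config, {split: len(ds[split]) for split in ds})
```

The printed split sizes should roughly match the numbers listed above.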
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
```
@misc{https://doi.org/10.48550/arxiv.2204.04541,
doi = {10.48550/ARXIV.2204.04541},
url = {https://arxiv.org/abs/2204.04541},
author = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
title = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
publisher = {arXiv},
year = {2022},
}
```
[More Information Needed]
### Contributions
Thanks to [@MJ-Jang](https://github.com/MJ-Jang) for adding this dataset. | skt/kobest_v1 | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2204.04541",
"region:us"
] | 2022-04-07T12:54:23+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ko"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "pretty_name": "KoBEST"} | 2022-08-22T08:00:17+00:00 | [
"2204.04541"
] | [
"ko"
] | TAGS
#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Korean #license-cc-by-sa-4.0 #arxiv-2204.04541 #region-us
|
# Dataset Card for KoBEST
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Repository: URL
- Paper:
- Point of Contact: URL
### Dataset Summary
KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
### Supported Tasks and Leaderboards
Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
### Languages
'ko-KR'
## Dataset Structure
### Data Instances
#### KB-BoolQ
An example of a data point looks as follows.
#### KB-COPA
An example of a data point looks as follows.
#### KB-WiC
An example of a data point looks as follows.
#### KB-HellaSwag
An example of a data point looks as follows.
#### KB-SentiNeg
An example of a data point looks as follows.
### Data Fields
### KB-BoolQ
+ 'paragraph': a 'string' feature
+ 'question': a 'string' feature
+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)
### KB-COPA
+ 'premise': a 'string' feature
+ 'question': a 'string' feature
+ 'alternative_1': a 'string' feature
+ 'alternative_2': a 'string' feature
+ 'label': an answer candidate label, with possible values 'alternative_1'(0) and 'alternative_2'(1)
### KB-WiC
+ 'target_word': a 'string' feature
+ 'context_1': a 'string' feature
+ 'context_2': a 'string' feature
+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)
### KB-HellaSwag
+ 'target_word': a 'string' feature
+ 'context_1': a 'string' feature
+ 'context_2': a 'string' feature
+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)
### KB-SentiNeg
+ 'sentence': a 'string' feature
+ 'label': a classification label, with possible values 'Negative'(0) and 'Positive'(1)
### Data Splits
#### KB-BoolQ
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-COPA
+ train: 3,076
+ dev: 1,000
+ test: 1,000
#### KB-WiC
+ train: 3,318
+ dev: 1,260
+ test: 1,260
#### KB-HellaSwag
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-SentiNeg
+ train: 3,649
+ dev: 400
+ test: 397
+ test_originated: 397 (Corresponding training data where the test set is originated from.)
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @MJ-Jang for adding this dataset. | [
"# Dataset Card for KoBEST",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: URL\n- Paper:\n- Point of Contact: URL",
"### Dataset Summary\n\nKoBEST is a Korean benchmark suite consists of 5 natural language understanding tasks that requires advanced knowledge in Korean.",
"### Supported Tasks and Leaderboards\n\nBoolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition",
"### Languages\n\n'ko-KR'",
"## Dataset Structure",
"### Data Instances",
"#### KB-BoolQ\nAn example of a data point looks as follows.",
"#### KB-COPA\nAn example of a data point looks as follows.",
"#### KB-WiC\nAn example of a data point looks as follows.",
"#### KB-HellaSwag\nAn example of a data point looks as follows.",
"#### KB-SentiNeg\nAn example of a data point looks as follows.",
"### Data Fields",
"### KB-BoolQ\n+ 'paragraph': a 'string' feature\n+ 'question': a 'string' feature\n+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)",
"### KB-COPA\n+ 'premise': a 'string' feature\n+ 'question': a 'string' feature\n+ 'alternative_1': a 'string' feature\n+ 'alternative_2': a 'string' feature\n+ 'label': an answer candidate label, with possible values 'alternative_1'(0) and 'alternative_2'(1)",
"### KB-WiC\n+ 'target_word': a 'string' feature\n+ 'context_1': a 'string' feature\n+ 'context_2': a 'string' feature\n+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)",
"### KB-HellaSwag\n+ 'target_word': a 'string' feature\n+ 'context_1': a 'string' feature\n+ 'context_2': a 'string' feature\n+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)",
"### KB-SentiNeg\n+ 'sentence': a 'string' feature\n+ 'label': a classification label, with possible values 'Negative'(0) and 'Positive'(1)",
"### Data Splits",
"#### KB-BoolQ\n\n+ train: 3,665\n+ dev: 700\n+ test: 1,404",
"#### KB-COPA\n\n+ train: 3,076\n+ dev: 1,000\n+ test: 1,000",
"#### KB-WiC\n\n+ train: 3,318\n+ dev: 1,260\n+ test: 1,260",
"#### KB-HellaSwag\n\n+ train: 3,665\n+ dev: 700\n+ test: 1,404",
"#### KB-SentiNeg\n\n+ train: 3,649\n+ dev: 400\n+ test: 397\n+ test_originated: 397 (Corresponding training data where the test set is originated from.)",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @MJ-Jang for adding this dataset."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Korean #license-cc-by-sa-4.0 #arxiv-2204.04541 #region-us \n",
"# Dataset Card for KoBEST",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: URL\n- Paper:\n- Point of Contact: URL",
"### Dataset Summary\n\nKoBEST is a Korean benchmark suite consists of 5 natural language understanding tasks that requires advanced knowledge in Korean.",
"### Supported Tasks and Leaderboards\n\nBoolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition",
"### Languages\n\n'ko-KR'",
"## Dataset Structure",
"### Data Instances",
"#### KB-BoolQ\nAn example of a data point looks as follows.",
"#### KB-COPA\nAn example of a data point looks as follows.",
"#### KB-WiC\nAn example of a data point looks as follows.",
"#### KB-HellaSwag\nAn example of a data point looks as follows.",
"#### KB-SentiNeg\nAn example of a data point looks as follows.",
"### Data Fields",
"### KB-BoolQ\n+ 'paragraph': a 'string' feature\n+ 'question': a 'string' feature\n+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)",
"### KB-COPA\n+ 'premise': a 'string' feature\n+ 'question': a 'string' feature\n+ 'alternative_1': a 'string' feature\n+ 'alternative_2': a 'string' feature\n+ 'label': an answer candidate label, with possible values 'alternative_1'(0) and 'alternative_2'(1)",
"### KB-WiC\n+ 'target_word': a 'string' feature\n+ 'context_1': a 'string' feature\n+ 'context_2': a 'string' feature\n+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)",
"### KB-HellaSwag\n+ 'target_word': a 'string' feature\n+ 'context_1': a 'string' feature\n+ 'context_2': a 'string' feature\n+ 'label': a classification label, with possible values 'False'(0) and 'True'(1)",
"### KB-SentiNeg\n+ 'sentence': a 'string' feature\n+ 'label': a classification label, with possible values 'Negative'(0) and 'Positive'(1)",
"### Data Splits",
"#### KB-BoolQ\n\n+ train: 3,665\n+ dev: 700\n+ test: 1,404",
"#### KB-COPA\n\n+ train: 3,076\n+ dev: 1,000\n+ test: 1,000",
"#### KB-WiC\n\n+ train: 3,318\n+ dev: 1,260\n+ test: 1,260",
"#### KB-HellaSwag\n\n+ train: 3,665\n+ dev: 700\n+ test: 1,404",
"#### KB-SentiNeg\n\n+ train: 3,649\n+ dev: 400\n+ test: 397\n+ test_originated: 397 (Corresponding training data where the test set is originated from.)",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @MJ-Jang for adding this dataset."
] |
1ff93b40a787d18186bfbf3abc594e62ea3f7e37 | # AutoTrain Dataset for project: test-21312
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test-21312.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"id": 300,
"target": 1,
"Pclass": 1,
"Name": "Baxter, Mrs. James (Helene DeLaudeniere Chaput)",
"Sex": "female",
"Age": 50.0,
"SibSp": 0,
"Parch": 1,
"Ticket": "PC 17558",
"Fare": 247.5208,
"Cabin": "B58 B60",
"Embarked": "C"
},
{
"id": 858,
"target": 1,
"Pclass": 1,
"Name": "Daly, Mr. Peter Denis ",
"Sex": "male",
"Age": 51.0,
"SibSp": 0,
"Parch": 0,
"Ticket": "113055",
"Fare": 26.55,
"Cabin": "E17",
"Embarked": "S"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"id": "Value(dtype='int64', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)",
"Pclass": "Value(dtype='int64', id=None)",
"Name": "Value(dtype='string', id=None)",
"Sex": "Value(dtype='string', id=None)",
"Age": "Value(dtype='float64', id=None)",
"SibSp": "Value(dtype='int64', id=None)",
"Parch": "Value(dtype='int64', id=None)",
"Ticket": "Value(dtype='string', id=None)",
"Fare": "Value(dtype='float64', id=None)",
"Cabin": "Value(dtype='string', id=None)",
"Embarked": "Value(dtype='string', id=None)"
}
```
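As a hedged sketch (not generated by AutoTrain), the schema above can be checked by loading the repository with the `datasets` library; note that AutoTrain data repositories are often private, so an access token may be required.

```python
from datasets import load_dataset

# May require `huggingface-cli login` if the repository is private.
ds = load_dataset("victor/autotrain-data-test-21312")
print(ds)                    # expected splits: train and valid (see below)
print(ds["train"].features)  # should match the fields listed above
```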
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 146 |
| valid | 37 |
| victor/autotrain-data-test-21312 | [
"region:us"
] | 2022-04-07T13:18:31+00:00 | {} | 2022-04-07T13:19:26+00:00 | [] | [] | TAGS
#region-us
| AutoTrain Dataset for project: test-21312
=========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project test-21312.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
5171fedc217c7bc893fa08f0e1d353a2cf666423 | image-classification
---
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality:
- monolingual
pretty_name: airplanes
size_categories: []
source_datasets: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for airplanes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
three classes of airplanes: drone, UAV, and fighter
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Drone images were taken from:
Wang, Ye, Yueru Chen, Jongmoo Choi, and C-C. Jay Kuo. “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks.” APSIPA Transactions on Signal and Information Processing 8 (2019).
[mcl-drone-dataset](https://mcl.usc.edu/mcl-drone-dataset/) | johnnydevriese/airplanes | [
"region:us"
] | 2022-04-07T19:34:25+00:00 | {} | 2022-09-16T14:28:53+00:00 | [] | [] | TAGS
#region-us
| image-classification
---
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality:
- monolingual
pretty_name: airplanes
size_categories: []
source_datasets: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for airplanes
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
three classes of airplanes: drone, UAV, and fighter
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Drone images were taken from:
Wang, Ye, Yueru Chen, Jongmoo Choi, and C-C. Jay Kuo. “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks.” APSIPA Transactions on Signal and Information Processing 8 (2019).
mcl-drone-dataset | [
"# Dataset Card for airplanes",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nthree classes of airplanes: drone, UAV, and fighter",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\n\nDrone images were taken from: \n\nWang, Ye, Yueru Chen, Jongmoo Choi, and C-C. Jay Kuo. “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks.” APSIPA Transactions on Signal and Information Processing 8 (2019).\n\nmcl-drone-dataset"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for airplanes",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nthree classes of airplanes: drone, UAV, and fighter",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\n\nDrone images were taken from: \n\nWang, Ye, Yueru Chen, Jongmoo Choi, and C-C. Jay Kuo. “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks.” APSIPA Transactions on Signal and Information Processing 8 (2019).\n\nmcl-drone-dataset"
] |
83551fe521307e2a05274a2150d1d554f898d083 | # GEM Submission
Submission name: ENT
| GEM-submissions/ratishsp__ent__1649421332 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T11:35:32+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "ENT", "tags": ["evaluation", "benchmark"]} | 2022-04-08T11:35:35+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: ENT
| [
"# GEM Submission\n\nSubmission name: ENT"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: ENT"
] |
822ca2e2310fc76c47ac7e02c2316a260f63d83d | # GEM Submission
Submission name: NCP_CC
| GEM-submissions/ratishsp__ncp_cc__1649422112 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T11:48:32+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "NCP_CC", "tags": ["evaluation", "benchmark"]} | 2022-04-08T11:48:34+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: NCP_CC
| [
"# GEM Submission\n\nSubmission name: NCP_CC"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: NCP_CC"
] |
8e91091fcdcf73d0dca08f4e73cd7b1cbf5c7b51 | # GEM Submission
Submission name: ENT
| GEM-submissions/ratishsp__ent__1649422569 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T11:56:09+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "ENT", "tags": ["evaluation", "benchmark"]} | 2022-04-08T11:56:11+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: ENT
| [
"# GEM Submission\n\nSubmission name: ENT"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: ENT"
] |
f6f5797f4852eb1ac0dad141ce7894ed6d71bf8a | # GEM Submission
Submission name: NCP_CC
| GEM-submissions/ratishsp__ncp_cc__1649422863 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T12:01:03+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "NCP_CC", "tags": ["evaluation", "benchmark"]} | 2022-04-08T12:01:05+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: NCP_CC
| [
"# GEM Submission\n\nSubmission name: NCP_CC"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: NCP_CC"
] |
b43970c4be4cff8c5259b043cc78202fa34e2bc3 |
# Dataset Card for pl-corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [UlyssesNER-Br homepage](https://github.com/Convenio-Camara-dos-Deputados/ulyssesner-br-propor)
- **Repository:** [UlyssesNER-Br repository](https://github.com/Convenio-Camara-dos-Deputados/ulyssesner-br-propor)
- **Paper:** [UlyssesNER-Br: A corpus of brazilian legislative documents for named entity recognition. In: Computational Processing of the Portuguese Language](https://link.springer.com/chapter/10.1007/978-3-030-98305-5_1)
- **Point of Contact:** [Hidelberg O. Albuquerque](mailto:[email protected])
### Dataset Summary
PL-corpus is part of UlyssesNER-Br, a corpus of Brazilian Legislative Documents for NER with quality baselines. The presented corpus consists of 150 public bills from the Brazilian Chamber of Deputies, manually annotated. It contains semantic categories and types.
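As a quick, hedged sketch (not part of the original card), the corpus can presumably be loaded straight from the Hub for token-classification experiments; split and column names are not documented here, so the code inspects them rather than assuming them.

```python
from datasets import load_dataset

ds = load_dataset("bergoliveira/pl-corpus")
print(ds)  # available splits
first_split = next(iter(ds.values()))
print(first_split.features)  # inspect the token/tag columns
```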
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Brazilian Portuguese.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@InProceedings{ALBUQUERQUE2022,
author="Albuquerque, Hidelberg O.
and Costa, Rosimeire
and Silvestre, Gabriel
and Souza, Ellen
and da Silva, N{\'a}dia F. F.
and Vit{\'o}rio, Douglas
and Moriyama, Gyovana
and Martins, Lucas
and Soezima, Luiza
and Nunes, Augusto
and Siqueira, Felipe
and Tarrega, Jo{\~a}o P.
and Beinotti, Joao V.
and Dias, Marcio
and Silva, Matheus
and Gardini, Miguel
and Silva, Vinicius
and de Carvalho, Andr{\'e} C. P. L. F.
and Oliveira, Adriano L. I.",
title="{UlyssesNER-Br}: A Corpus of Brazilian Legislative Documents for Named Entity Recognition",
booktitle="Computational Processing of the Portuguese Language",
year="2022",
pages="3--14",
}
``` | bergoliveira/pl-corpus | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:pt",
"license:unknown",
"legal",
"legislative",
"region:us"
] | 2022-04-08T14:15:10+00:00 | {"language": ["pt"], "license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "pretty_name": "plcorpus", "tags": ["legal", "legislative"]} | 2023-05-01T13:25:22+00:00 | [] | [
"pt"
] | TAGS
#task_categories-token-classification #size_categories-10K<n<100K #language-Portuguese #license-unknown #legal #legislative #region-us
|
# Dataset Card for pl-corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: UlyssesNER-Br homepage
- Repository: UlyssesNER-Br repository
- Paper: UlyssesNER-Br: A corpus of brazilian legislative documents for named entity recognition. In: Computational Processing of the Portuguese Language
- Point of Contact: Hidelberg O. Albuquerque
### Dataset Summary
PL-corpus is part of UlyssesNER-Br, a corpus of Brazilian Legislative Documents for NER with quality baselines. The presented corpus consists of 150 public bills from the Brazilian Chamber of Deputies, manually annotated. It contains semantic categories and types.
### Supported Tasks and Leaderboards
### Languages
Brazilian Portuguese.
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for pl-corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description \n\n- Homepage: UlyssesNER-Br homepage\n- Repository: UlyssesNER-Br repository\n- Paper: UlyssesNER-Br: A corpus of brazilian legislative documents for named entity recognition. In: Computational Processing of the Portuguese Language\n- Point of Contact: Hidelberg O. Albuquerque",
"### Dataset Summary\n\nPL-corpus is part of the UlyssesNER-Br, a corpus of Brazilian Legislative Documents for NER with quality baselines The presented corpus consists of 150 public bills from Brazilian Chamber of Deputies, manually annotated. Its contains semantic categories and types.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nBrazilian Portuguese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#task_categories-token-classification #size_categories-10K<n<100K #language-Portuguese #license-unknown #legal #legislative #region-us \n",
"# Dataset Card for pl-corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description \n\n- Homepage: UlyssesNER-Br homepage\n- Repository: UlyssesNER-Br repository\n- Paper: UlyssesNER-Br: A corpus of brazilian legislative documents for named entity recognition. In: Computational Processing of the Portuguese Language\n- Point of Contact: Hidelberg O. Albuquerque",
"### Dataset Summary\n\nPL-corpus is part of the UlyssesNER-Br, a corpus of Brazilian Legislative Documents for NER with quality baselines The presented corpus consists of 150 public bills from Brazilian Chamber of Deputies, manually annotated. Its contains semantic categories and types.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nBrazilian Portuguese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
67e283fee4cd7cbabbe771d1df88382b043e914c | annotations_creators: []
language_creators: []
languages: []
licenses: []
multilinguality: []
pretty_name: humor_train
size_categories: []
source_datasets: []
task_categories: []
task_ids: [] | lm233/humor_train | [
"region:us"
] | 2022-04-08T17:10:37+00:00 | {} | 2022-04-08T17:13:45+00:00 | [] | [] | TAGS
#region-us
| annotations_creators: []
language_creators: []
languages: []
licenses: []
multilinguality: []
pretty_name: humor_train
size_categories: []
source_datasets: []
task_categories: []
task_ids: [] | [] | [
"TAGS\n#region-us \n"
] |
66cd1dbf5577c653ecb99b385200f08e15e12f30 | # Dataset Card for TopiOCQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/)
- **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa)
- **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768)
- **Point of Contact:** [Vaibhav Adlakha](mailto:[email protected])
### Dataset Summary
TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.
### Languages
The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.
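A minimal, hedged sketch of loading the dataset with the `datasets` library, using the Hub id from this card; split and column names are inspected rather than assumed.

```python
from datasets import load_dataset

ds = load_dataset("McGill-NLP/TopiOCQA")
print(ds)  # available splits
first_split = next(iter(ds.values()))
print(first_split[0].keys())  # conversational QA fields
```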
## Additional Information
### Licensing Information
TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@article{adlakha2022topiocqa,
title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching},
author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva},
journal={Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {468-483},
year = {2022},
month = {04},
issn = {2307-387X},
doi = {10.1162/tacl_a_00471},
url = {https://doi.org/10.1162/tacl\_a\_00471},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00471/2008126/tacl\_a\_00471.pdf},
}
``` | McGill-NLP/TopiOCQA | [
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"language:en",
"license:cc-by-nc-sa-4.0",
"conversational-question-answering",
"arxiv:2110.00768",
"region:us"
] | 2022-04-08T17:29:53+00:00 | {"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "task_categories": ["text-retrieval", "text-generation"], "task_ids": ["language-modeling", "open-domain-qa"], "pretty_name": "Open-domain Conversational Question Answering with Topic Switching", "tags": ["conversational-question-answering"]} | 2023-09-29T18:37:48+00:00 | [
"2110.00768"
] | [
"en"
] | TAGS
#task_categories-text-retrieval #task_categories-text-generation #task_ids-language-modeling #task_ids-open-domain-qa #annotations_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100k #language-English #license-cc-by-nc-sa-4.0 #conversational-question-answering #arxiv-2110.00768 #region-us
| # Dataset Card for TopiOCQA
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: TopiOCQA homepage
- Repository: TopiOCQA Github
- Paper: Open-domain Conversational Question Answering with Topic Switching
- Point of Contact: Vaibhav Adlakha
### Dataset Summary
TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.
### Languages
The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.
## Additional Information
### Licensing Information
TopiOCQA is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
| [
"# Dataset Card for TopiOCQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: TopiOCQA homepage\n- Repository: TopiOCQA Github\n- Paper: Open-domain Conversational Question Answering with Topic Switching\n- Point of Contact: Vaibhav Adlakha",
"### Dataset Summary\n\nTopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.",
"### Languages\n\nThe language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.",
"## Additional Information",
"### Licensing Information\n\nTopiOCQA is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] | [
"TAGS\n#task_categories-text-retrieval #task_categories-text-generation #task_ids-language-modeling #task_ids-open-domain-qa #annotations_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100k #language-English #license-cc-by-nc-sa-4.0 #conversational-question-answering #arxiv-2110.00768 #region-us \n",
"# Dataset Card for TopiOCQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: TopiOCQA homepage\n- Repository: TopiOCQA Github\n- Paper: Open-domain Conversational Question Answering with Topic Switching\n- Point of Contact: Vaibhav Adlakha",
"### Dataset Summary\n\nTopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.",
"### Languages\n\nThe language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.",
"## Additional Information",
"### Licensing Information\n\nTopiOCQA is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] |
b14fd6edb25ad7646d25599565008cadc013f952 |
# Dataset Card for [Smithsonian Butterflies]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
### Dataset Summary
High-res images crawled from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections.
### Supported Tasks and Leaderboards
Includes metadata about the scientific name of butterflies, but there may be missing values. Might be good for classification.
### Languages
English
## Dataset Structure
### Data Instances
# Example data
```
{'image_url': 'https://ids.si.edu/ids/deliveryService?id=ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'image_alt': 'view Aholibah Underwing digital asset number 1',
'id': 'ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'name': 'Aholibah Underwing',
'scientific_name': 'Catocala aholibah',
'gender': None,
'taxonomy': 'Animalia, Arthropoda, Hexapoda, Insecta, Lepidoptera, Noctuidae, Catocalinae',
'region': None,
'locality': None,
'date': None,
'usnm_no': 'EO400317-DSP',
'guid': 'http://n2t.net/ark:/65665/39b506292-715f-45a7-8511-b49bb087c7de',
'edan_url': 'edanmdm:nmnheducation_10866595',
'source': 'Smithsonian Education and Outreach collections',
'stage': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2000x1328 at 0x7F57D0504DC0>,
'image_hash': '27a5fe92f72f8b116d3b7d65bac84958',
'sim_score': 0.8440760970115662}
```
### Data Fields
sim_score indicates the CLIP score for the prompt "pretty butterfly". It is used to eliminate non-butterfly images (e.g. images of just ID cards).
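For example, rows with a low sim_score could be dropped like this (a sketch: the split name and the 0.8 threshold are assumptions, not part of the dataset):

```python
from datasets import load_dataset

# Split name and threshold are illustrative assumptions.
ds = load_dataset("ceyda/smithsonian_butterflies", split="train")
butterflies = ds.filter(lambda ex: ex["sim_score"] > 0.8)
print(len(ds), "->", len(butterflies))
```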
### Data Splits
No specific split exists.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
Crawled from "Education and Outreach" & "NMNH - Entomology Dept." collections found online [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Doesn't include all butterfly species

## Additional Information
### Dataset Curators
Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections
### Licensing Information
Only results marked: CC0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. | ceyda/smithsonian_butterflies | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-04-08T23:38:13+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "Smithsonian Butterflies"} | 2022-07-13T08:32:27+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for [Smithsonian Butterflies]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections here
### Dataset Summary
High-res images crawled from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections.
### Supported Tasks and Leaderboards
Includes metadata about the scientific name of butterflies, but there may be missing values. Might be good for classification.
### Languages
English
## Dataset Structure
### Data Instances
# Example data
### Data Fields
sim_score indicates the CLIP score for the prompt "pretty butterfly". It is used to eliminate non-butterfly images (e.g. images of just ID cards).
### Data Splits
No specific split exists.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
Crawled from "Education and Outreach" & "NMNH - Entomology Dept." collections found online here
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Doesn't include all butterfly species

## Additional Information
### Dataset Curators
Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections
### Licensing Information
Only results marked: CC0
### Contributions
Thanks to @cceyda for adding this dataset. | [
"# Dataset Card for [Smithsonian Butterflies]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Smithsonian \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections here",
"### Dataset Summary\n\nHigh-res images from Smithsonian \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections. Crawled",
"### Supported Tasks and Leaderboards\n\nIncludes metadata about the scientific name of butterflies, but there maybe missing values. Might be good for classification.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"# Example data",
"### Data Fields\n\nsim-score indicates clip score for \"pretty butterfly\". This is to eliminate non-butterfly images(just id card images etc)",
"### Data Splits\n\nNo specific split exists.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n\nCrawled from \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections found online here",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\nDoesn't include all butterfly species ## Additional Information",
"### Dataset Curators\n\nSmithsonian \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections",
"### Licensing Information\n\nOnly results marked: CC0",
"### Contributions\n\nThanks to @cceyda for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for [Smithsonian Butterflies]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Smithsonian \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections here",
"### Dataset Summary\n\nHigh-res images from Smithsonian \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections. Crawled",
"### Supported Tasks and Leaderboards\n\nIncludes metadata about the scientific name of butterflies, but there maybe missing values. Might be good for classification.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"# Example data",
"### Data Fields\n\nsim-score indicates clip score for \"pretty butterfly\". This is to eliminate non-butterfly images(just id card images etc)",
"### Data Splits\n\nNo specific split exists.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n\nCrawled from \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections found online here",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\nDoesn't include all butterfly species ## Additional Information",
"### Dataset Curators\n\nSmithsonian \"Education and Outreach\" & \"NMNH - Entomology Dept.\" collections",
"### Licensing Information\n\nOnly results marked: CC0",
"### Contributions\n\nThanks to @cceyda for adding this dataset."
] |
6b37397565bdbd6ede10e362e6a1be4c62083bb3 | Dataset Summary
- Natural Language Processing with Disaster Tweets: https://www.kaggle.com/competitions/nlp-getting-started/data
- This particular challenge is perfect for data scientists looking to get started with Natural Language Processing. The competition dataset is not too big, and even if you don’t have much personal computing power, you can do all of the work in our free, no-setup, Jupyter Notebooks environment called Kaggle Notebooks.
Columns
- id - a unique identifier for each tweet
- text - the text of the tweet
- location - the location the tweet was sent from (may be blank)
- keyword - a particular keyword from the tweet (may be blank)
- target - in train.csv only, this denotes whether a tweet is about a real disaster (1) or not (0)
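A minimal sketch of reading the competition files with pandas (file names follow the Kaggle competition layout; local downloads are assumed):

```python
import pandas as pd

# File names follow the Kaggle competition layout; a local download is assumed.
train = pd.read_csv("train.csv")  # columns: id, keyword, location, text, target
test = pd.read_csv("test.csv")    # same columns, without target

print(train["target"].value_counts())                # 1 = real disaster, 0 = not
print(train[["keyword", "location"]].isna().mean())  # share of blank values
```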
| gdwangh/kaggle-nlp-getting-start | [
"region:us"
] | 2022-04-09T07:03:46+00:00 | {} | 2022-04-09T07:13:03+00:00 | [] | [] | TAGS
#region-us
| Dataset Summary
- Natural Language Processing with Disaster Tweets: URL
- This particular challenge is perfect for data scientists looking to get started with Natural Language Processing. The competition dataset is not too big, and even if you don’t have much personal computing power, you can do all of the work in our free, no-setup, Jupyter Notebooks environment called Kaggle Notebooks.
Columns
- id - a unique identifier for each tweet
- text - the text of the tweet
- location - the location the tweet was sent from (may be blank)
- keyword - a particular keyword from the tweet (may be blank)
- target - in URL only, this denotes whether a tweet is about a real disaster (1) or not (0)
| [] | [
"TAGS\n#region-us \n"
] |
5ce8dc4c178d59d0fcb8f3e580f93fa95ed57901 | # Data Summary
This dataset contains images of Moroccan Chebakia (Traditional Ramadan Sweets).
# Data Source
All of the images were web scraped using a Google image search API.
### Contributions
[`Ilyas Moutawwakil`](https://huggingface.co/IlyasMoutawwakil) added this dataset to the hub. | huggan/chebakia | [
"region:us"
] | 2022-04-09T15:37:31+00:00 | {} | 2022-05-27T10:53:19+00:00 | [] | [] | TAGS
#region-us
| # Data Summary
This dataset contains images of Moroccan Chebakia (Traditional Ramadan Sweets).
# Data Source
All of the images were web scraped using a Google image search API.
### Contributions
'Ilyas Moutawwakil' added this dataset to the hub. | [
"# Data Summary\nThis dataset contains images of Moroccan Chebakia (Traditional Ramadan Sweets).",
"# Data Source\nAll of the images were web scrapped using a google image search API.",
"### Contributions\n'Ilyas Moutawwakil' added this dataset to the hub."
] | [
"TAGS\n#region-us \n",
"# Data Summary\nThis dataset contains images of Moroccan Chebakia (Traditional Ramadan Sweets).",
"# Data Source\nAll of the images were web scrapped using a google image search API.",
"### Contributions\n'Ilyas Moutawwakil' added this dataset to the hub."
] |
cf40283692122fe32d2c1d009f5b1a674be473ad | #flowersdataset #segmentation #VGG
# Dataset Card for Flowers Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Official VGG'S README.md](#official-vggs-README.md)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/flowers/17/index.html
- **Repository:** https://huggingface.co/datasets/Guldeniz/flower_dataset
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
VGG have created a 17 category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations, and there are also classes with large variations of images within the class and close similarity to other classes. The categories can be seen in the figure below. We randomly split the dataset into 3 different training, validation and test sets. A subset of the images have been ground-truth labelled for segmentation.
You can find the split files in the link, as a mat file.
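A minimal sketch of reading the split file with SciPy, assuming the key names (trn1, val1, tst1, ...) described in the official README quoted below:

```python
from scipy.io import loadmat

# Key names follow the official README quoted below; a local copy of the file is assumed.
splits = loadmat("datasplits.mat")
train_idx = splits["trn1"].ravel()  # 1-based image indices for split 1
val_idx = splits["val1"].ravel()
test_idx = splits["tst1"].ravel()
print(len(train_idx), len(val_idx), len(test_idx))
```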
### Official VGG's README.md
17 Flower Category Database
----------------------------------------------
This set contains images of flowers belonging to 17 different categories.
The images were acquired by searching the web and taking pictures. There are
80 images for each category.
The database was used in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The datasplits used in this paper are specified in datasplits.mat
There are 3 separate splits. The results in the paper are averaged over the 3 splits.
Each split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)
and a testfile (tst1, tst2 or tst3).
Segmentation Ground Truth
------------------------------------------------
The ground truth is given for a subset of the images from 13 different
categories.
More details can be found in:
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http:www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.(pdf,ps.gz).
The ground truth file also contains the file imlist.mat, which indicates
which images in the original database have been annotated.
Distance matrices
-----------------------------------------------
We provide two set of distance matrices:
1. distancematrices17gcfeat06.mat
- Distance matrices using the same features and segmentation as detailed in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
2. distancematrices17itfeat08.mat
- Distance matrices using the same features as described in:
Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.
Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback08.{pdf,ps.gz}.
and the iterative segmentation scheme detailed in
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http:www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.(pdf,ps.gz). | Guldeniz/flower_dataset | [
"region:us"
] | 2022-04-09T19:36:46+00:00 | {} | 2022-04-09T19:52:59+00:00 | [] | [] | TAGS
#region-us
| #flowersdataset #segmentation #VGG
# Dataset Card for Flowers Dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Official VGG'S URL
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
VGG have created a 17 category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations, and there are also classes with large variations of images within the class and close similarity to other classes. The categories can be seen in the figure below. We randomly split the dataset into 3 different training, validation and test sets. A subset of the images have been ground-truth labelled for segmentation.
You can find the split files in the link, as a mat file.
### Official VGG's URL
17 Flower Category Database
----------------------------------------------
This set contains images of flowers belonging to 17 different categories.
The images were acquired by searching the web and taking pictures. There are
80 images for each category.
The database was used in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006)
URL
The datasplits used in this paper are specified in URL
There are 3 separate splits. The results in the paper are averaged over the 3 splits.
Each split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)
and a testfile (tst1, tst2 or tst3).
Segmentation Ground Truth
------------------------------------------------
The ground truth is given for a subset of the images from 13 different
categories.
More details can be found in:
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http:URL
The ground truth file also contains the file URL, which indicates
which images in the original database have been annotated.
Distance matrices
-----------------------------------------------
We provide two set of distance matrices:
1. URL
- Distance matrices using the same features and segmentation as detailed in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)
URL
2. URL
- Distance matrices using the same features as described in:
Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.
Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)
URL
and the iterative segmentation scheme detailed in
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http:URL | [
"# Dataset Card for Flowers Dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Official VGG'S URL",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nVGG have created a 17 category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations and there are also classes with large varations of images within the class and close similarity to other classes. The categories can be seen in the figure below. We randomly split the dataset into 3 different training, validation and test sets. A subset of the images have been groundtruth labelled for segmentation.\nYou can find the split files in the link, as a mat file.",
"### Official VGG's URL\n\n17 Flower Category Database\n----------------------------------------------\nThis set contains images of flowers belonging to 17 different categories. \nThe images were acquired by searching the web and taking pictures. There are\n80 images for each category. \n\nThe database was used in:\n\nNilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.\nProceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006) \nURL\n\nThe datasplits used in this paper are specified in URL\n\nThere are 3 separate splits. The results in the paper are averaged over the 3 splits.\nEach split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)\nand a testfile (tst1, tst2 or tst3). \n\nSegmentation Ground Truth\n------------------------------------------------\nThe ground truth is given for a subset of the images from 13 different\ncategories. \n\nMore details can be found in:\n\nNilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.\nProceedings of the British Machine Vision Conference (2007)\nhttp:URL\n\nThe ground truth file also contains the file URL, which indicated\nwhich images in the original database that have been anotated.\n\nDistance matrices\n-----------------------------------------------\n\nWe provide two set of distance matrices:\n\n1. URL \n- Distance matrices using the same features and segmentation as detailed in:\n Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.\n Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)\n URL\n\n2. URL\n- Distance matrices using the same features as described in: \n Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.\n Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)\n URL\n and the iterative segmenation scheme detailed in \n Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.\n Proceedings of the British Machine Vision Conference (2007)\n http:URL"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Flowers Dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Official VGG'S URL",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nVGG have created a 17 category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations and there are also classes with large varations of images within the class and close similarity to other classes. The categories can be seen in the figure below. We randomly split the dataset into 3 different training, validation and test sets. A subset of the images have been groundtruth labelled for segmentation.\nYou can find the split files in the link, as a mat file.",
"### Official VGG's URL\n\n17 Flower Category Database\n----------------------------------------------\nThis set contains images of flowers belonging to 17 different categories. \nThe images were acquired by searching the web and taking pictures. There are\n80 images for each category. \n\nThe database was used in:\n\nNilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.\nProceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006) \nURL\n\nThe datasplits used in this paper are specified in URL\n\nThere are 3 separate splits. The results in the paper are averaged over the 3 splits.\nEach split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)\nand a testfile (tst1, tst2 or tst3). \n\nSegmentation Ground Truth\n------------------------------------------------\nThe ground truth is given for a subset of the images from 13 different\ncategories. \n\nMore details can be found in:\n\nNilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.\nProceedings of the British Machine Vision Conference (2007)\nhttp:URL\n\nThe ground truth file also contains the file URL, which indicated\nwhich images in the original database that have been anotated.\n\nDistance matrices\n-----------------------------------------------\n\nWe provide two set of distance matrices:\n\n1. URL \n- Distance matrices using the same features and segmentation as detailed in:\n Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.\n Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)\n URL\n\n2. URL\n- Distance matrices using the same features as described in: \n Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.\n Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)\n URL\n and the iterative segmenation scheme detailed in \n Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.\n Proceedings of the British Machine Vision Conference (2007)\n http:URL"
] |
c8356096c3ce93ad76030b135e33f4ccd099816e |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/dooggies).
Model is available [here](https://huggingface.co/huggingnft/dooggies).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/dooggies")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
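A single record can then be inspected like this (a sketch; the split name is an assumption):

```python
from datasets import load_dataset

# Split name is an assumption; field names follow the list above.
dataset = load_dataset("huggingnft/dooggies", split="train")
sample = dataset[0]
sample["image"].save("dooggie_0.png")  # PIL image
print(sample["id"], sample["image_original_url"])
```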
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/dooggies | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-09T19:54:53+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/dooggies"]} | 2022-04-16T16:59:05+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptoadz-by-gremplin).
Model is available [here](https://huggingface.co/huggingnft/cryptoadz-by-gremplin).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptoadz-by-gremplin")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/cryptoadz-by-gremplin | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:20:00+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cryptoadz-by-gremplin"]} | 2022-04-16T16:59:06+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cyberkongz).
Model is available [here](https://huggingface.co/huggingnft/cyberkongz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cyberkongz")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/cyberkongz | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:33:51+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cyberkongz"]} | 2022-04-16T16:59:06+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/mini-mutants).
Model is available [here](https://huggingface.co/huggingnft/mini-mutants).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/mini-mutants")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/mini-mutants | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:42:22+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/mini-mutants"]} | 2022-04-16T16:59:06+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/theshiboshis).
Model is available [here](https://huggingface.co/huggingnft/theshiboshis).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/theshiboshis")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
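As a quick sanity check, a single record can be inspected as follows. This is only a sketch: the `train` split name is an assumption, and `token_metadata` is stored as a plain string whose exact schema varies by collection, so decoding it as JSON may not always succeed.

```python
import json

from datasets import load_dataset

dataset = load_dataset("huggingnft/theshiboshis", split="train")

example = dataset[0]
print(example["id"], example["image_original_url"])
print(example["image"].size)  # decoded as a PIL image: (width, height)

# token_metadata is a string; many collections store JSON here
try:
    metadata = json.loads(example["token_metadata"])
    print(type(metadata), str(metadata)[:120])
except (TypeError, json.JSONDecodeError):
    print("token_metadata is not JSON for this record")
```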
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/theshiboshis | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:48:07+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/theshiboshis"]} | 2022-04-16T16:59:06+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptopunks).
Model is available [here](https://huggingface.co/huggingnft/cryptopunks).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptopunks")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
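Because the collection is intended for unconditional image generation, a typical next step is to turn the images into training batches. The sketch below assumes a `train` split and an arbitrary 64×64 target resolution:

```python
import torch
from datasets import load_dataset
from torchvision import transforms

dataset = load_dataset("huggingnft/cryptopunks", split="train")

preprocess = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),                       # scales to [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # shifts to [-1, 1]
])

def to_tensors(batch):
    # Return only tensors so the default collate function can stack them
    return {"pixel_values": [preprocess(img.convert("RGB")) for img in batch["image"]]}

dataset.set_transform(to_tensors)

loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
batch = next(iter(loader))
print(batch["pixel_values"].shape)  # e.g. torch.Size([16, 3, 64, 64])
```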
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/cryptopunks | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:52:12+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cryptopunks"]} | 2022-04-16T16:59:07+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/nftrex).
Model is available [here](https://huggingface.co/huggingnft/nftrex).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/nftrex")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/nftrex | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:55:12+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/nftrex"]} | 2022-04-16T16:59:07+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/etherbears).
Model is available [here](https://huggingface.co/huggingnft/etherbears).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/etherbears")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/etherbears | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:57:17+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/etherbears"]} | 2022-04-16T16:59:07+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/alpacadabraz).
Model is available [here](https://huggingface.co/huggingnft/alpacadabraz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/alpacadabraz")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/alpacadabraz | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T08:01:03+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/alpacadabraz"]} | 2022-04-16T16:59:07+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/trippytoadznft).
Model is available [here](https://huggingface.co/huggingnft/trippytoadznft).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/trippytoadznft")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/trippytoadznft | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T08:07:49+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/trippytoadznft"]} | 2022-04-16T16:59:07+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/boredapeyachtclub).
Model is available [here](https://huggingface.co/huggingnft/boredapeyachtclub).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/boredapeyachtclub")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
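Since no predefined splits are documented here, a held-out set can be carved out on the fly if needed. A sketch only: the `train` split name, the 10% ratio, and the seed are arbitrary assumptions.

```python
from datasets import load_dataset

dataset = load_dataset("huggingnft/boredapeyachtclub", split="train")

# Reserve a small evaluation set, e.g. for monitoring generation quality
splits = dataset.train_test_split(test_size=0.1, seed=42)
print(len(splits["train"]), len(splits["test"]))
```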
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/boredapeyachtclub | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T08:14:53+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/boredapeyachtclub"]} | 2022-04-16T16:59:08+00:00 | [] | [] | TAGS
#license-mit #huggingnft #nft #huggan #gan #image #images #region-us
|
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- About
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Point of Contact:
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
### Supported Tasks and Leaderboards
## How to use
How to load this dataset directly with the datasets library:
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- 'image': an 'image' feature.
- 'id': an 'int' feature.
- 'token_metadata': a 'str' feature.
- 'image_original_url': a 'str' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
## About
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\nFor more details, visit the project repository.\n\n\n\nFor more details, visit the project repository.\n\n
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
The RuREBus dataset (https://github.com/dialogue-evaluation/RuREBus) is
a Russian-language dataset for named entity recognition and relation extraction.
## Dataset Structure
There are two subsets of the dataset.
Using
`load_dataset('MalakhovIlya/RuREBus')`
you can download the annotated data (a `DatasetDict`) for the named entity recognition and
relation extraction tasks.
This subset consists of two splits: "train" and "test".
Using
`load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']`
you can download a large corpus (~3 GB) of raw texts (a `Dataset`) from the same subject
area, but without any annotations.
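Put together, loading the annotated subset looks like this (a short sketch using the repository id exactly as given above; the raw-text corpus is fetched the same way with the second call):

```python
from datasets import load_dataset

# Annotated subset: a DatasetDict with "train" and "test" splits
rurebus = load_dataset("MalakhovIlya/RuREBus")
print(rurebus)                      # available splits and column names
print(rurebus["train"][0].keys())   # feature names of a single example
```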
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is a position of the first symbol of entity in text,
`<stop>` is the last symbol position in text +1.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
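For illustration, these annotation strings can be unpacked with a couple of small helpers. This is only a sketch based on the formats described above; the ids and types in the demo calls are made up.

```python
def parse_entity(line: str) -> dict:
    # "<id>\t<type> <start> <stop>\t<text>"
    ent_id, type_and_span, text = line.split("\t", 2)
    ent_type, start, stop = type_and_span.split(" ")
    return {
        "id": ent_id,
        "type": ent_type,
        "start": int(start),   # position of the first character
        "stop": int(stop),     # position of the last character + 1
        "text": text,
    }


def parse_relation(line: str) -> dict:
    # "<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"
    rel_id, rest = line.split("\t", 1)
    rel_type, arg1, arg2 = rest.split(" ")
    return {
        "id": rel_id,
        "type": rel_type,
        "arg1": arg1.split(":", 1)[1],
        "arg2": arg2.split(":", 1)[1],
    }


# Illustrative strings only; real ids and types come from the dataset
print(parse_entity("T1\tMET 0 4\tтемп"))
print(parse_relation("R1\tTSK Arg1:T1 Arg2:T2"))
```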
## Citation Information
@inproceedings{rurebus,
Address = {Moscow, Russia},
Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},
Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},
Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},
Year = {2020}
}
| iluvvatar/RuREBus | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-04-10T08:52:30+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "RuREBus"} | 2023-03-30T12:37:32+00:00 | [] | [
"ru"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #region-us
|
# RuREBus dataset
## Table of Contents
- Dataset Description
- Dataset Structure
- Citation Information
- Contacts
## Dataset Description
RuREBus dataset (URL is
a Russian dataset for named entity recognition and relation extraction.
## Dataset Structure
There are two subsets of the dataset.
Using
'load_dataset('MalakhovIlya/RuREBus')'
you can download annotated data (DatasetDict) for named entity recognition task and
relation extraction tasks.
This subset consists of two splits: "train" and "test".
Using
'load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']'
you can download (Dataset) large corpus (~3gb) raw texts of the same subject
area, but without any annotations.
"entities" are used in named-entity recognition task (see URL
"relations" are used in relationship extraction task (see URL
Each entity is represented by a string of the following format:
'"<id>\t<type> <start> <stop>\t<text>"', where
'<id>' is an entity id,
'<type>' is one of entity types,
'<start>' is a position of the first symbol of entity in text,
'<stop>' is the last symbol position in text +1.
Each relation is represented by a string of the following format:
'"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"', where
'<id>' is a relation id,
'<arg1_id>' and '<arg2_id>' are entity ids.
@inproceedings{rurebus,
Address = {Moscow, Russia},
Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},
Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},
Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},
Year = {2020}
}
| [
"# RuREBus dataset",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Citation Information\n- Contacts",
"## Dataset Description\nRuREBus dataset (URL is\na Russian dataset for named entity recognition and relation extraction.",
"## Dataset Structure\nThere are two subsets of the dataset.\n\nUsing \n'load_dataset('MalakhovIlya/RuREBus')' \nyou can download annotated data (DatasetDict) for named entity recognition task and\nrelation extraction tasks. \nThis subset consists of two splits: \"train\" and \"test\".\n \nUsing \n'load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']' \nyou can download (Dataset) large corpus (~3gb) raw texts of the same subject\narea, but without any annotations.\n\n\"entities\" are used in named-entity recognition task (see URL \n\"relations\" are used in relationship extraction task (see URL\n\nEach entity is represented by a string of the following format: \n'\"<id>\\t<type> <start> <stop>\\t<text>\"', where \n'<id>' is an entity id, \n'<type>' is one of entity types, \n'<start>' is a position of the first symbol of entity in text, \n'<stop>' is the last symbol position in text +1.\n\nEach relation is represented by a string of the following format: \n'\"<id>\\t<type> Arg1:<arg1_id> Arg2:<arg2_id>\"', where \n'<id>' is a relation id, \n'<arg1_id>' and '<arg2_id>' are entity ids.\n\n\n@inproceedings{rurebus,\n Address = {Moscow, Russia},\n Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},\n Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},\n Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},\n Year = {2020}\n}"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #region-us \n",
"# RuREBus dataset",
"## Table of Contents\n- Dataset Description\n- Dataset Structure\n- Citation Information\n- Contacts",
"## Dataset Description\nRuREBus dataset (URL is\na Russian dataset for named entity recognition and relation extraction.",
"## Dataset Structure\nThere are two subsets of the dataset.\n\nUsing \n'load_dataset('MalakhovIlya/RuREBus')' \nyou can download annotated data (DatasetDict) for named entity recognition task and\nrelation extraction tasks. \nThis subset consists of two splits: \"train\" and \"test\".\n \nUsing \n'load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']' \nyou can download (Dataset) large corpus (~3gb) raw texts of the same subject\narea, but without any annotations.\n\n\"entities\" are used in named-entity recognition task (see URL \n\"relations\" are used in relationship extraction task (see URL\n\nEach entity is represented by a string of the following format: \n'\"<id>\\t<type> <start> <stop>\\t<text>\"', where \n'<id>' is an entity id, \n'<type>' is one of entity types, \n'<start>' is a position of the first symbol of entity in text, \n'<stop>' is the last symbol position in text +1.\n\nEach relation is represented by a string of the following format: \n'\"<id>\\t<type> Arg1:<arg1_id> Arg2:<arg2_id>\"', where \n'<id>' is a relation id, \n'<arg1_id>' and '<arg2_id>' are entity ids.\n\n\n@inproceedings{rurebus,\n Address = {Moscow, Russia},\n Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},\n Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},\n Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},\n Year = {2020}\n}"
] |
0c4d3efa8324ce171b8e8393b713786f64c63612 | SCIERC (Luan et al., 2018) via "Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks" (Gururangan et al., 2020) reuploaded because of error encountered when trying to load zj88zj/SCIERC with the huggingfaces/datasets library. | nsusemiehl/SciERC | [
"region:us"
] | 2022-04-10T15:51:23+00:00 | {} | 2022-04-10T15:56:55+00:00 | [] | [] | TAGS
#region-us
| SCIERC (Luan et al., 2018) via "Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks" (Gururangan et al., 2020) reuploaded because of error encountered when trying to load zj88zj/SCIERC with the huggingfaces/datasets library. | [] | [
"TAGS\n#region-us \n"
] |
6cfe8e5afe107823c07b64d48e333b9b85ae332b | # SKM-TEA Sample Data
This dataset consists of a subset of scans from the [SKM-TEA dataset](https://arxiv.org/abs/2203.06823). It can be used to build tutorials / demos with the SKM-TEA dataset.
To access the full dataset, please follow the instructions on [Github](https://github.com/StanfordMIMI/skm-tea/blob/main/DATASET.md).
**NOTE**: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.
## Details
This mini dataset (~30GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are [lzf-compressed](http://www.h5py.org/lzf/) to reduce size while maximizing speed for decompression.
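As a rough illustration (the file name and the `kspace` key below are assumptions, not taken from this card; see the SKM-TEA documentation for the actual HDF5 layout), reading one of the lzf-compressed files with `h5py` might look like this. h5py decompresses lzf chunks transparently, so no extra flags are needed.
```python
import h5py

# File path and key name are hypothetical -- inspect the printed keys to see
# which arrays a given scan file actually contains.
with h5py.File("path/to/raw_data_scan.h5", "r") as f:
    print(list(f.keys()))        # inspect the available datasets in the file
    kspace = f["kspace"][()]     # "kspace" is an assumed key name
    print(kspace.shape, kspace.dtype)
```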
## License
By using this dataset, you agree to the [Stanford University Dataset Research Use Agreement](https://stanfordaimi.azurewebsites.net/datasets/4aaeafb9-c6e6-4e3c-9188-3aaaf0e0a9e7).
## Reference
If you use this dataset, please reference the SKM-TEA paper:
```
@inproceedings{
desai2021skmtea,
title={{SKM}-{TEA}: A Dataset for Accelerated {MRI} Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation},
author={Arjun D Desai and Andrew M Schmidt and Elka B Rubin and Christopher Michael Sandino and Marianne Susan Black and Valentina Mazzoli and Kathryn J Stevens and Robert Boutin and Christopher Re and Garry E Gold and Brian Hargreaves and Akshay Chaudhari},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=YDMFgD_qJuA}
}
```
| arjundd/skm-tea-mini | [
"language:en",
"license:other",
"mri",
"quantitative mri",
"reconstruction",
"segmentation",
"detection",
"arxiv:2203.06823",
"region:us"
] | 2022-04-10T16:16:33+00:00 | {"language": "en", "license": "other", "tags": ["mri", "quantitative mri", "reconstruction", "segmentation", "detection"]} | 2022-05-02T19:01:34+00:00 | [
"2203.06823"
] | [
"en"
] | TAGS
#language-English #license-other #mri #quantitative mri #reconstruction #segmentation #detection #arxiv-2203.06823 #region-us
| # SKM-TEA Sample Data
This dataset consists of a subset of scans from the SKM-TEA dataset. It can be used to build tutorials / demos with the SKM-TEA dataset.
To access to the full dataset, please follow instructions on Github.
NOTE: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.
## Details
This mini dataset (~30GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are lzf-compressed to reduce size while maximizing speed for decompression.
## License
By using this dataset, you agree to the Stanford University Dataset Research Use Agreement.
## Reference
If you use this dataset, please reference the SKM-TEA paper:
| [
"# SKM-TEA Sample Data\nThis dataset consists of a subset of scans from the SKM-TEA dataset. It can be used to build tutorials / demos with the SKM-TEA dataset.\n\nTo access to the full dataset, please follow instructions on Github.\n\nNOTE: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.",
"## Details\nThis mini dataset (~30GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are lzf-compressed to reduce size while maximizing speed for decompression.",
"## License\nBy using this dataset, you agree to the Stanford University Dataset Research Use Agreement.",
"## Reference\nIf you use this dataset, please reference the SKM-TEA paper:"
] | [
"TAGS\n#language-English #license-other #mri #quantitative mri #reconstruction #segmentation #detection #arxiv-2203.06823 #region-us \n",
"# SKM-TEA Sample Data\nThis dataset consists of a subset of scans from the SKM-TEA dataset. It can be used to build tutorials / demos with the SKM-TEA dataset.\n\nTo access to the full dataset, please follow instructions on Github.\n\nNOTE: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.",
"## Details\nThis mini dataset (~30GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are lzf-compressed to reduce size while maximizing speed for decompression.",
"## License\nBy using this dataset, you agree to the Stanford University Dataset Research Use Agreement.",
"## Reference\nIf you use this dataset, please reference the SKM-TEA paper:"
] |
456a91903148a8a02f7903b4941ef21ef6f7366f | # AirDrums Data
This dataset contains all data needed for training
`2d_images` contains raw, unsegmented image data for the 2-dimensional dataset. Filenames encode the capture timestamp.
`3d_images` contains raw, unsegmented (paired) image data for the 3-dimensional dataset. Filenames encode the capture timestamp and camera angle.
Images from both of the previous sets are to be segmented and converted to a coordinate and direction.
`2d_imu` contains IMU data for training in 2-dimensional space (xy), paired with the segmented images from above.
`3d_imu` contains IMU data for training in 3-dimensional space (xyz), paired with the segmented images from above and from the front (xy and yz planes).
---
language:
- en
tags:
- sensor
- location
datasets:
- 2d_images
- 3d_images
- 2d_imu
- 3d_imu
---
| mattgmcadams/AirDrums | [
"region:us"
] | 2022-04-10T23:22:39+00:00 | {} | 2022-04-10T23:40:23+00:00 | [] | [] | TAGS
#region-us
| # AirDrums Data
This dataset contains all data needed for training
'2d_images' contains raw unsegmented image data for 2-dimensional dataset. filenames are representative of timestamp
'3d_images' contains raw unsegmented image data (paired) for 3-dimensional dataset. filenames are representative of timestamp and camera angle
images from both of the previous sets are to be segmented and converted to a coordinate and direction
'2d_imu' contains IMU data for training in 2-dimensional space (xy) with segmented images from above
'3d_imu' contains IMU data for training in 3-dimensional space(xyz) with segmented images from above and front (xy and yz planes)
---
language:
- en
tags:
- sensor
- location
datasets:
- 2d_images
- 3d_images
- 2d_imu
- 3d_imu
---
| [
"# AirDrums Data\nThis dataset contains all data needed for training\n\n'2d_images' contains raw unsegmented image data for 2-dimensional dataset. filenames are representative of timestamp\n\n'3d_images' contains raw unsegmented image data (paired) for 3-dimensional dataset. filenames are representative of timestamp and camera angle\n\nimages from both of the previous sets are to be segmented and converted to a coordinate and direction\n\n'2d_imu' contains IMU data for training in 2-dimensional space (xy) with segmented images from above\n\n'3d_imu' contains IMU data for training in 3-dimensional space(xyz) with segmented images from above and front (xy and yz planes)\n\n\n---\nlanguage: \n- en\n\ntags:\n- sensor\n- location\n\ndatasets:\n- 2d_images\n- 3d_images\n- 2d_imu\n- 3d_imu\n---"
] | [
"TAGS\n#region-us \n",
"# AirDrums Data\nThis dataset contains all data needed for training\n\n'2d_images' contains raw unsegmented image data for 2-dimensional dataset. filenames are representative of timestamp\n\n'3d_images' contains raw unsegmented image data (paired) for 3-dimensional dataset. filenames are representative of timestamp and camera angle\n\nimages from both of the previous sets are to be segmented and converted to a coordinate and direction\n\n'2d_imu' contains IMU data for training in 2-dimensional space (xy) with segmented images from above\n\n'3d_imu' contains IMU data for training in 3-dimensional space(xyz) with segmented images from above and front (xy and yz planes)\n\n\n---\nlanguage: \n- en\n\ntags:\n- sensor\n- location\n\ndatasets:\n- 2d_images\n- 3d_images\n- 2d_imu\n- 3d_imu\n---"
] |
4fcdce42bb4668907d572c4ae6ac03307847a7ff |
### About DataSet
The dataset is based on the NEREL corpus.
For more information about the original data, please visit this [source](https://github.com/dialogue-evaluation/RuNNE).
An example of preparing the original data is provided in <Prepare_original_data.ipynb>.
### Additional info
The dataset contains 29 entity types; each of them can appear either as the beginning of an entity ("B-") or as an inner part ("I-").
Frequency for each entity:
- I-AGE: 284
- B-AGE: 247
- B-AWARD: 285
- I-AWARD: 466
- B-CITY: 1080
- I-CITY: 39
- B-COUNTRY: 2378
- I-COUNTRY: 128
- B-CRIME: 214
- I-CRIME: 372
- B-DATE: 2701
- I-DATE: 5437
- B-DISEASE: 136
- I-DISEASE: 80
- B-DISTRICT: 98
- I-DISTRICT: 73
- B-EVENT: 3369
- I-EVENT: 2524
- B-FACILITY: 376
- I-FACILITY: 510
- B-FAMILY: 27
- I-FAMILY: 22
- B-IDEOLOGY: 271
- I-IDEOLOGY: 20
- B-LANGUAGE: 32
- I-LAW: 1196
- B-LAW: 297
- B-LOCATION: 242
- I-LOCATION: 139
- B-MONEY: 147
- I-MONEY: 361
- B-NATIONALITY: 437
- I-NATIONALITY: 41
- B-NUMBER: 1079
- I-NUMBER: 328
- B-ORDINAL: 485
- I-ORDINAL: 6
- B-ORGANIZATION: 3339
- I-ORGANIZATION: 3354
- B-PENALTY: 73
- I-PENALTY: 104
- B-PERCENT: 51
- I-PERCENT: 37
- B-PERSON: 5148
- I-PERSON: 3635
- I-PRODUCT: 48
- B-PRODUCT: 197
- B-PROFESSION: 3869
- I-PROFESSION: 2598
- B-RELIGION: 102
- I-RELIGION: 1
- B-STATE_OR_PROVINCE: 436
- I-STATE_OR_PROVINCE: 154
- B-TIME: 187
- I-TIME: 529
- B-WORK_OF_ART: 133
- I-WORK_OF_ART: 194
You can find mapper for entity ids in <id_to_label_map.pickle> file:
```python
import pickle
with open('id_to_label_map.pickle', 'rb') as f:
mapper = pickle.load(f)
``` | surdan/nerel_short | [
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-04-11T05:34:28+00:00 | {"language": "ru", "multilinguality": "monolingual", "task_ids": ["named-entity-recognition"]} | 2022-10-25T09:06:49+00:00 | [] | [
"ru"
] | TAGS
#task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #region-us
|
### About DataSet
The dataset based on NEREL corpus.
For more information about original data, please visit this source
Example of preparing original data illustrated in <Prepare_original_data.ipynb>
### Additional info
The dataset consist 29 entities, each of them can be as beginner part of entity "B-" as inner "I-".
Frequency for each entity:
- I-AGE: 284
- B-AGE: 247
- B-AWARD: 285
- I-AWARD: 466
- B-CITY: 1080
- I-CITY: 39
- B-COUNTRY: 2378
- I-COUNTRY: 128
- B-CRIME: 214
- I-CRIME: 372
- B-DATE: 2701
- I-DATE: 5437
- B-DISEASE: 136
- I-DISEASE: 80
- B-DISTRICT: 98
- I-DISTRICT: 73
- B-EVENT: 3369
- I-EVENT: 2524
- B-FACILITY: 376
- I-FACILITY: 510
- B-FAMILY: 27
- I-FAMILY: 22
- B-IDEOLOGY: 271
- I-IDEOLOGY: 20
- B-LANGUAGE: 32
- I-LAW: 1196
- B-LAW: 297
- B-LOCATION: 242
- I-LOCATION: 139
- B-MONEY: 147
- I-MONEY: 361
- B-NATIONALITY: 437
- I-NATIONALITY: 41
- B-NUMBER: 1079
- I-NUMBER: 328
- B-ORDINAL: 485
- I-ORDINAL: 6
- B-ORGANIZATION: 3339
- I-ORGANIZATION: 3354
- B-PENALTY: 73
- I-PENALTY: 104
- B-PERCENT: 51
- I-PERCENT: 37
- B-PERSON: 5148
- I-PERSON: 3635
- I-PRODUCT: 48
- B-PRODUCT: 197
- B-PROFESSION: 3869
- I-PROFESSION: 2598
- B-RELIGION: 102
- I-RELIGION: 1
- B-STATE_OR_PROVINCE: 436
- I-STATE_OR_PROVINCE: 154
- B-TIME: 187
- I-TIME: 529
- B-WORK_OF_ART: 133
- I-WORK_OF_ART: 194
You can find mapper for entity ids in <id_to_label_map.pickle> file:
| [
"### About DataSet\n\nThe dataset based on NEREL corpus.\n\nFor more information about original data, please visit this source\n\nExample of preparing original data illustrated in <Prepare_original_data.ipynb>",
"### Additional info\n\nThe dataset consist 29 entities, each of them can be as beginner part of entity \"B-\" as inner \"I-\".\n\nFrequency for each entity:\n\n- I-AGE: 284\n- B-AGE: 247\n- B-AWARD: 285\n- I-AWARD: 466\n- B-CITY: 1080\n- I-CITY: 39\n- B-COUNTRY: 2378\n- I-COUNTRY: 128\n- B-CRIME: 214\n- I-CRIME: 372\n- B-DATE: 2701\n- I-DATE: 5437\n- B-DISEASE: 136\n- I-DISEASE: 80\n- B-DISTRICT: 98\n- I-DISTRICT: 73\n- B-EVENT: 3369\n- I-EVENT: 2524\n- B-FACILITY: 376\n- I-FACILITY: 510\n- B-FAMILY: 27\n- I-FAMILY: 22\n- B-IDEOLOGY: 271\n- I-IDEOLOGY: 20\n- B-LANGUAGE: 32\n- I-LAW: 1196\n- B-LAW: 297\n- B-LOCATION: 242\n- I-LOCATION: 139\n- B-MONEY: 147\n- I-MONEY: 361\n- B-NATIONALITY: 437\n- I-NATIONALITY: 41\n- B-NUMBER: 1079\n- I-NUMBER: 328\n- B-ORDINAL: 485\n- I-ORDINAL: 6\n- B-ORGANIZATION: 3339\n- I-ORGANIZATION: 3354\n- B-PENALTY: 73\n- I-PENALTY: 104\n- B-PERCENT: 51\n- I-PERCENT: 37\n- B-PERSON: 5148\n- I-PERSON: 3635\n- I-PRODUCT: 48\n- B-PRODUCT: 197\n- B-PROFESSION: 3869\n- I-PROFESSION: 2598\n- B-RELIGION: 102\n- I-RELIGION: 1\n- B-STATE_OR_PROVINCE: 436\n- I-STATE_OR_PROVINCE: 154\n- B-TIME: 187\n- I-TIME: 529\n- B-WORK_OF_ART: 133\n- I-WORK_OF_ART: 194\n\nYou can find mapper for entity ids in <id_to_label_map.pickle> file:"
] | [
"TAGS\n#task_ids-named-entity-recognition #multilinguality-monolingual #language-Russian #region-us \n",
"### About DataSet\n\nThe dataset based on NEREL corpus.\n\nFor more information about original data, please visit this source\n\nExample of preparing original data illustrated in <Prepare_original_data.ipynb>",
"### Additional info\n\nThe dataset consist 29 entities, each of them can be as beginner part of entity \"B-\" as inner \"I-\".\n\nFrequency for each entity:\n\n- I-AGE: 284\n- B-AGE: 247\n- B-AWARD: 285\n- I-AWARD: 466\n- B-CITY: 1080\n- I-CITY: 39\n- B-COUNTRY: 2378\n- I-COUNTRY: 128\n- B-CRIME: 214\n- I-CRIME: 372\n- B-DATE: 2701\n- I-DATE: 5437\n- B-DISEASE: 136\n- I-DISEASE: 80\n- B-DISTRICT: 98\n- I-DISTRICT: 73\n- B-EVENT: 3369\n- I-EVENT: 2524\n- B-FACILITY: 376\n- I-FACILITY: 510\n- B-FAMILY: 27\n- I-FAMILY: 22\n- B-IDEOLOGY: 271\n- I-IDEOLOGY: 20\n- B-LANGUAGE: 32\n- I-LAW: 1196\n- B-LAW: 297\n- B-LOCATION: 242\n- I-LOCATION: 139\n- B-MONEY: 147\n- I-MONEY: 361\n- B-NATIONALITY: 437\n- I-NATIONALITY: 41\n- B-NUMBER: 1079\n- I-NUMBER: 328\n- B-ORDINAL: 485\n- I-ORDINAL: 6\n- B-ORGANIZATION: 3339\n- I-ORGANIZATION: 3354\n- B-PENALTY: 73\n- I-PENALTY: 104\n- B-PERCENT: 51\n- I-PERCENT: 37\n- B-PERSON: 5148\n- I-PERSON: 3635\n- I-PRODUCT: 48\n- B-PRODUCT: 197\n- B-PROFESSION: 3869\n- I-PROFESSION: 2598\n- B-RELIGION: 102\n- I-RELIGION: 1\n- B-STATE_OR_PROVINCE: 436\n- I-STATE_OR_PROVINCE: 154\n- B-TIME: 187\n- I-TIME: 529\n- B-WORK_OF_ART: 133\n- I-WORK_OF_ART: 194\n\nYou can find mapper for entity ids in <id_to_label_map.pickle> file:"
] |
820e1da2eaf57add263d470621bc2a3f43a021e7 | This dataset contains 10 examples of the [segments/sidewalk-semantic](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset (i.e. 10 images with corresponding ground-truth segmentation maps). | huggingface/semantic-segmentation-test-sample | [
"region:us"
] | 2022-04-11T08:12:00+00:00 | {} | 2022-04-11T08:15:24+00:00 | [] | [] | TAGS
#region-us
| This dataset contains 10 examples of the segments/sidewalk-semantic dataset (i.e. 10 images with corresponding ground-truth segmentation maps). | [] | [
"TAGS\n#region-us \n"
] |
02f598f31161ab47a167d725b0de3dc3c0efdde8 |
## Dataset Description
This dataset provides easier access to the original [MNLI dataset](https://huggingface.co/datasets/multi_nli).
We randomly choose 10% of the original `validation_matched` split and use it as the validation split.
The remaining 90% are used for the test split.
The train split remains unchanged. | westphal-jan/mnli_matched | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"source_datasets:multi_nli",
"region:us"
] | 2022-04-11T09:06:59+00:00 | {"source_datasets": ["multi_nli"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring"]} | 2022-04-16T11:02:51+00:00 | [] | [] | TAGS
#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #source_datasets-multi_nli #region-us
|
## Dataset Description
This dataset provides easier accessibility to the original MNLI dataset.
We randomly choose 10% of the original 'validation_matched' split and use it as the validation split.
The remaining 90% are used for the test split.
The train split remains unchanged. | [
"## Dataset Description\n\nThis dataset provides easier accessibility to the original MNLI dataset.\n\nWe randomly choose 10% of the original 'validation_matched' split and use it as the validation split.\nThe remaining 90% are used for the test split.\nThe train split remains unchanged."
] | [
"TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #source_datasets-multi_nli #region-us \n",
"## Dataset Description\n\nThis dataset provides easier accessibility to the original MNLI dataset.\n\nWe randomly choose 10% of the original 'validation_matched' split and use it as the validation split.\nThe remaining 90% are used for the test split.\nThe train split remains unchanged."
] |
3b2935a74731f120004bdcbc3f9fd73f7d854c96 |
# Dataset Card for `squad_bn`
## Table of Contents
- [Dataset Card for `squad_bn`](#dataset-card-for-squad_bn)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:[email protected])
### Dataset Summary
This is a Question Answering (QA) dataset for Bengali, curated from the [SQuAD 2.0](https://arxiv.org/abs/1606.05250) and [TyDI-QA](https://arxiv.org/abs/2003.05002) datasets, using the state-of-the-art English to Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglabert)
### Languages
* `Bengali`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/squad_bn")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
"title": "শেখ মুজিবুর রহমান",
"paragraphs": [
{
"qas": [
{
"answers": [
{
"answer_start": 19,
"text": "১৭ মার্চ ১৯২০"
}
],
"id": "bengali--981248442377505718-0-2649",
"question": "শেখ মুজিবুর রহমান কবে জন্মগ্রহণ করেন ?"
}
],
"context": "শেখ মুজিবুর রহমান (১৭ মার্চ ১৯২০ - ১৫ আগস্ট ১৯৭৫) বাংলাদেশের প্রথম রাষ্ট্রপতি ও ভারতীয় উপমহাদেশের একজন অন্যতম প্রভাবশালী রাজনৈতিক ব্যক্তিত্ব যিনি বাঙালীর অধিকার রক্ষায় ব্রিটিশ ভারত থেকে ভারত বিভাজন আন্দোলন এবং পরবর্তীতে পূর্ব পাকিস্তান থেকে বাংলাদেশ প্রতিষ্ঠার সংগ্রামে নেতৃত্ব প্রদান করেন। প্রাচীন বাঙ্গালি সভ্যতার আধুনিক স্থপতি হিসাবে শেখ মুজিবুর রহমানকে বাংলাদেশের জাতির জনক বা জাতির পিতা বলা হয়ে থাকে। তিনি মাওলানা আব্দুল হামিদ খান ভাসানী প্রতিষ্ঠিত আওয়ামী লীগের সভাপতি, বাংলাদেশের প্রথম রাষ্ট্রপতি এবং পরবর্তীতে এদেশের প্রধানমন্ত্রীর দায়িত্ব পালন করেন। জনসাধারণের কাছে তিনি শেখ মুজিব এবং শেখ সাহেব হিসাবে বেশি পরিচিত এবং তার উপাধি বঙ্গবন্ধু। তার কন্যা শেখ হাসিনা বাংলাদেশ আওয়ামী লীগের বর্তমান সভানেত্রী এবং বাংলাদেশের বর্তমান প্রধানমন্ত্রী।"
}
]
}
```
### Data Fields
The data fields are as follows:
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
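A minimal sketch of how these fields might be accessed (assuming the SQuAD-style parallel-list layout of `answers` described above):
```python
from datasets import load_dataset

dataset = load_dataset("csebuetnlp/squad_bn")
example = dataset["train"][0]

print(example["id"], example["title"])
print(example["question"])

# `answers` holds parallel lists; SQuAD 2.0-style unanswerable questions
# presumably come with empty lists here.
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    # `answer_start` is presumably a character offset into `context`
    print(text, start)
```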
### Data Splits
| split |count |
|----------|--------|
|`train`| 127771 |
|`validation`| 2502 |
|`test`| 2504 |
## Dataset Creation
For the training set, we translated the complete [SQuAD 2.0](https://aclanthology.org/N18-1101/) dataset using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Because automatic translation can introduce errors, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and the original sentences to compute their similarity. A datapoint was accepted only if all of its constituent sentences had a similarity score over 0.7.
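As a hedged sketch of the kind of filtering described above (the authors' actual script is not part of this card, and the `sentence-transformers` API below is just one way to compute LaBSE embeddings):
```python
from sentence_transformers import SentenceTransformer, util

# Illustrative only -- details of the original pipeline may differ.
model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_datapoint(source_sentences, translated_sentences, threshold=0.7):
    src_emb = model.encode(source_sentences, convert_to_tensor=True)
    tgt_emb = model.encode(translated_sentences, convert_to_tensor=True)
    # cosine similarity of each source sentence with its own translation
    sims = util.cos_sim(src_emb, tgt_emb).diagonal()
    return bool((sims > threshold).all())
```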
Since the TyDI-QA Gold Passage task guarantees that the given context contains the answer and we want to pose our QA task analogous to SQuAD 2.0, we also consider examples from the Passage selection task that don't have an answer for the given question. We distribute the resultant examples from the TyDI-QA training and validation sets (which are publicly available) evenly to our test and validation sets.
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglabert)
### Source Data
[SQuAD 2.0](https://arxiv.org/abs/1606.05250), [TyDi-QA](https://arxiv.org/abs/2003.05002)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglabert)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglabert)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglabert)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglabert)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglabert)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@misc{bhattacharjee2021banglabert,
title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
year={2021},
eprint={2101.00204},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. | csebuetnlp/squad_bn | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bn",
"license:cc-by-nc-sa-4.0",
"arxiv:2101.00204",
"arxiv:2007.01852",
"arxiv:1606.05250",
"arxiv:2003.05002",
"region:us"
] | 2022-04-11T09:16:26+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["bn"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"]} | 2022-08-21T12:17:43+00:00 | [
"2101.00204",
"2007.01852",
"1606.05250",
"2003.05002"
] | [
"bn"
] | TAGS
#task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended #language-Bengali #license-cc-by-nc-sa-4.0 #arxiv-2101.00204 #arxiv-2007.01852 #arxiv-1606.05250 #arxiv-2003.05002 #region-us
| Dataset Card for 'squad\_bn'
============================
Table of Contents
-----------------
* Dataset Card for 'squad\_bn'
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Usage
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Repository: URL
* Paper: "BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"
* Point of Contact: Tahmid Hasan
### Dataset Summary
This is a Question Answering (QA) dataset for Bengali, curated from the SQuAD 2.0, TyDI-QA datasets and using the state-of-the-art English to Bengali translation model introduced here.
### Supported Tasks and Leaderboards
More information needed
### Languages
* 'Bengali'
### Usage
Dataset Structure
-----------------
### Data Instances
One example from the dataset is given below in JSON format.
### Data Fields
The data fields are as follows:
* 'id': a 'string' feature.
* 'title': a 'string' feature.
* 'context': a 'string' feature.
* 'question': a 'string' feature.
* 'answers': a dictionary feature containing:
+ 'text': a 'string' feature.
+ 'answer\_start': a 'int32' feature.
### Data Splits
Dataset Creation
----------------
For the training set, we translated the complete SQuAD 2.0 dataset using the English to Bangla translation model introduced here. Due to the possibility of incursions of error during automatic translation, we used the Language-Agnostic BERT Sentence Embeddings (LaBSE) of the translations and original sentences to compute their similarity. A datapoint was accepted if all of its constituent sentences had a similarity score over 0.7.
Since the TyDI-QA Gold Passage task guarantees that the given context contains the answer and we want to pose our QA task analogous to SQuAD 2.0, we also consider examples from the Passage selection task that don't have an answer for the given question. We distribute the resultant examples from the TyDI-QA training and validation sets (which are publicly available) evenly to our test and validation sets.
### Curation Rationale
More information needed
### Source Data
SQuAD 2.0, TyDi-QA
#### Initial Data Collection and Normalization
More information needed
#### Who are the source language producers?
More information needed
### Annotations
More information needed
#### Annotation process
More information needed
#### Who are the annotators?
More information needed
### Personal and Sensitive Information
More information needed
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
More information needed
### Discussion of Biases
More information needed
### Other Known Limitations
More information needed
Additional Information
----------------------
### Dataset Curators
More information needed
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.
If you use the dataset, please cite the following paper:
### Contributions
Thanks to @abhik1505040 and @Tahmid for adding this dataset.
| [
"### Dataset Summary\n\n\nThis is a Question Answering (QA) dataset for Bengali, curated from the SQuAD 2.0, TyDI-QA datasets and using the state-of-the-art English to Bengali translation model introduced here.",
"### Supported Tasks and Leaderboards\n\n\nMore information needed",
"### Languages\n\n\n* 'Bengali'",
"### Usage\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne example from the dataset is given below in JSON format.",
"### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nFor the training set, we translated the complete SQuAD 2.0 dataset using the English to Bangla translation model introduced here. Due to the possibility of incursions of error during automatic translation, we used the Language-Agnostic BERT Sentence Embeddings (LaBSE) of the translations and original sentences to compute their similarity. A datapoint was accepted if all of its constituent sentences had a similarity score over 0.7.\n\n\nSince the TyDI-QA Gold Passage task guarantees that the given context contains the answer and we want to pose our QA task analogous to SQuAD 2.0, we also consider examples from the Passage selection task that don't have an answer for the given question. We distribute the resultant examples from the TyDI-QA training and validation sets (which are publicly available) evenly to our test and validation sets.",
"### Curation Rationale\n\n\nMore information needed",
"### Source Data\n\n\nSQuAD 2.0, TyDi-QA",
"#### Initial Data Collection and Normalization\n\n\nMore information needed",
"#### Who are the source language producers?\n\n\nMore information needed",
"### Annotations\n\n\nMore information needed",
"#### Annotation process\n\n\nMore information needed",
"#### Who are the annotators?\n\n\nMore information needed",
"### Personal and Sensitive Information\n\n\nMore information needed\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nMore information needed",
"### Discussion of Biases\n\n\nMore information needed",
"### Other Known Limitations\n\n\nMore information needed\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nMore information needed",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use the dataset, please cite the following paper:",
"### Contributions\n\n\nThanks to @abhik1505040 and @Tahmid for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended #language-Bengali #license-cc-by-nc-sa-4.0 #arxiv-2101.00204 #arxiv-2007.01852 #arxiv-1606.05250 #arxiv-2003.05002 #region-us \n",
"### Dataset Summary\n\n\nThis is a Question Answering (QA) dataset for Bengali, curated from the SQuAD 2.0, TyDI-QA datasets and using the state-of-the-art English to Bengali translation model introduced here.",
"### Supported Tasks and Leaderboards\n\n\nMore information needed",
"### Languages\n\n\n* 'Bengali'",
"### Usage\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne example from the dataset is given below in JSON format.",
"### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nFor the training set, we translated the complete SQuAD 2.0 dataset using the English to Bangla translation model introduced here. Due to the possibility of incursions of error during automatic translation, we used the Language-Agnostic BERT Sentence Embeddings (LaBSE) of the translations and original sentences to compute their similarity. A datapoint was accepted if all of its constituent sentences had a similarity score over 0.7.\n\n\nSince the TyDI-QA Gold Passage task guarantees that the given context contains the answer and we want to pose our QA task analogous to SQuAD 2.0, we also consider examples from the Passage selection task that don't have an answer for the given question. We distribute the resultant examples from the TyDI-QA training and validation sets (which are publicly available) evenly to our test and validation sets.",
"### Curation Rationale\n\n\nMore information needed",
"### Source Data\n\n\nSQuAD 2.0, TyDi-QA",
"#### Initial Data Collection and Normalization\n\n\nMore information needed",
"#### Who are the source language producers?\n\n\nMore information needed",
"### Annotations\n\n\nMore information needed",
"#### Annotation process\n\n\nMore information needed",
"#### Who are the annotators?\n\n\nMore information needed",
"### Personal and Sensitive Information\n\n\nMore information needed\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nMore information needed",
"### Discussion of Biases\n\n\nMore information needed",
"### Other Known Limitations\n\n\nMore information needed\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nMore information needed",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use the dataset, please cite the following paper:",
"### Contributions\n\n\nThanks to @abhik1505040 and @Tahmid for adding this dataset."
] |
2cf230d6428c8e3cb35710b9aa18858cc33084bc | # Dataset Card for [FrozenLake-v1] | AntoineLB/Frozen-lake-dataset | [
"region:us"
] | 2022-04-11T11:55:48+00:00 | {} | 2022-04-21T11:16:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for [FrozenLake-v1] | [
"# Dataset Card for [FrozenLake-v1]"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [FrozenLake-v1]"
] |
643ceefc17441e56cff66f57c03b13615545d42b | Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag based supervision but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets and detecting sarcasm in these requires the availability of contextual tweets.
To overcome the limitations related to noise in Twitter datasets, this Headlines dataset for Sarcasm Detection was collected from two news websites. TheOnion aims at producing sarcastic versions of current events, and we collected all the headlines from its News in Brief and News in Photos categories (which are sarcastic). We collected real (and non-sarcastic) news headlines from HuffPost.
This new dataset has the following advantages over the existing Twitter datasets:
Since news headlines are written by professionals in a formal manner, there are no spelling mistakes and informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings.
Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
Unlike tweets which are replies to other tweets, the news headlines we obtained are self-contained. This would help us in teasing apart the real sarcastic elements. | raquiba/Sarcasm_News_Headline | [
"region:us"
] | 2022-04-12T02:50:36+00:00 | {} | 2022-04-14T07:19:08+00:00 | [] | [] | TAGS
#region-us
| Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag based supervision but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets and detecting sarcasm in these requires the availability of contextual tweets.
To overcome the limitations related to noise in Twitter datasets, this Headlines dataset for Sarcasm Detection is collected from two news website. TheOnion aims at producing sarcastic versions of current events and we collected all the headlines from News in Brief and News in Photos categories (which are sarcastic). We collect real (and non-sarcastic) news headlines from HuffPost.
This new dataset has the following advantages over the existing Twitter datasets:
Since news headlines are written by professionals in a formal manner, there are no spelling mistakes and informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings.
Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
Unlike tweets which are replies to other tweets, the news headlines we obtained are self-contained. This would help us in teasing apart the real sarcastic elements. | [] | [
"TAGS\n#region-us \n"
] |
dd723264101153ba5ddf3451e65446346000f496 |
# Inspec Benchmark Dataset for Keyphrase Generation
## About
Inspec is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 2,000 abstracts of scientific papers collected from the [Inspec database](https://www.theiet.org/resources/inspec/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the inspec dataset can be found in the original paper [(Hulth, 2003)][hulth-2003].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
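A rough approximation of this preprocessing in Python (the authoritative implementation, including the custom rule that keeps hyphenated words such as "graph-based" as single tokens, is the `prmu.py` script mentioned above):
```python
import spacy
from nltk.stem import PorterStemmer

# Approximation only -- see `prmu.py` for the exact tokenization rules.
nlp = spacy.load("en_core_web_sm")
stemmer = PorterStemmer()

def tokenize(text):
    return [tok.text for tok in nlp(text)]

def stem(tokens):
    return [stemmer.stem(tok) for tok in tokens]

tokens = tokenize("A graph-based approach to automatic keyphrase extraction.")
print(stem(tokens))
```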
## Content and statistics
The dataset is divided into the following three splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 1,000 | 141.7 | 9.79 | 78.00 | 9.85 | 6.22 | 5.93 |
| Validation | 500 | 132.2 | 9.15 | 77.96 | 9.82 | 6.75 | 5.47 |
| Test | 500 | 134.8 | 9.83 | 78.70 | 9.92 | 6.48 | 4.91 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
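A minimal loading sketch (assuming the dataset loads with its default configuration and exposes the fields listed above):
```python
from datasets import load_dataset

dataset = load_dataset("taln-ls2n/inspec")
sample = dataset["train"][0]

print(sample["title"])
print(sample["keyphrases"])   # reference keyphrases
print(sample["prmu"])         # one PRMU category per reference keyphrase
```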
## References
- (Hulth, 2003) Anette Hulth. 2003.
[Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028).
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[hulth-2003]: https://aclanthology.org/W03-1028/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | taln-ls2n/inspec | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | 2022-04-12T07:10:45+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "Inspec"} | 2022-07-21T13:14:59+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-unknown #region-us
| Inspec Benchmark Dataset for Keyphrase Generation
=================================================
About
-----
Inspec is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 2,000 abstracts of scientific papers collected from the Inspec database.
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the inspec dataset can be found in the original paper [(Hulth, 2003)](URL).
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in [(Boudin and Gallina, 2021)](URL).
Text pre-processing (tokenization) is carried out using 'spacy' ('en\_core\_web\_sm' model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in 'nltk') is applied before reference keyphrases are matched against the source text.
Details about the process can be found in 'URL'.
Content and statistics
----------------------
The dataset is divided into the following three splits:
The following data fields are available :
* id: unique identifier of the document.
* title: title of the document.
* abstract: abstract of the document.
* keyphrases: list of reference keyphrases.
* prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases.
References
----------
* (Hulth, 2003) Anette Hulth. 2003.
Improved automatic keyword extraction given more linguistic knowledge.
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223.
* (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
| [] | [
"TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-unknown #region-us \n"
] |
29b145026c2ec661f3ad581e42f73c68be4f4a13 |
# Dataset Description
Out of **20,577** human proteins (from the [UniProt human proteome](https://www.uniprot.org/proteomes/UP000005640)), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of **12,703** proteins. The uShuffle algorithm ([python package](https://github.com/guma44/ushuffle)) was then used to shuffle these protein sequences while maintaining their singlet distribution.
Afterwards, the h-CD-HIT algorithm ([web server](http://weizhong-lab.ucsd.edu/cdhit-web-server/cgi-bin/index.cgi)) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of **11,698** sequences.
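As a hedged illustration of what a singlet-preserving shuffle does (the dataset itself was generated with the uShuffle package; the sequence below is an arbitrary example, not taken from the data):
```python
import random

def shuffle_preserving_singlets(sequence, seed=None):
    # Any random permutation of the residues keeps the single-amino-acid
    # (singlet) counts unchanged; uShuffle generalizes this idea to
    # arbitrary k-let preservation.
    rng = random.Random(seed)
    residues = list(sequence)
    rng.shuffle(residues)
    return "".join(residues)

print(shuffle_preserving_singlets("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", seed=0))
```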
# **Citation**
If you use this dataset, please cite our paper:
```
@article {
author = {Geffen, Yaron and Ofran, Yanay and Unger, Ron},
title = {DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts},
year = {2022},
doi = {10.1093/bioinformatics/btac474},
URL = {https://doi.org/10.1093/bioinformatics/btac474},
journal = {Bioinformatics}
}
``` | yarongef/human_proteome_singlets | [
"license:mit",
"region:us"
] | 2022-04-12T07:20:31+00:00 | {"license": "mit"} | 2022-09-21T07:45:02+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Dataset Description
Out of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their singlet distribution.
Afterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 11,698 sequences.
# Citation
If you use this dataset, please cite our paper:
| [
"# Dataset Description\n\nOut of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their singlet distribution.\nAfterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 11,698 sequences.",
"# Citation\nIf you use this dataset, please cite our paper:"
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Description\n\nOut of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their singlet distribution.\nAfterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 11,698 sequences.",
"# Citation\nIf you use this dataset, please cite our paper:"
] |
68b2211a56c1f2a2276d10c2d0f31a416ed9a2c9 |
# Dataset Card for Snacks
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Number of images in the train/validation/test splits:
```nohighlight
train 4838
val 955
test 952
total 6745
```
Total images in each category:
```nohighlight
apple 350
banana 350
cake 349
candy 349
carrot 349
cookie 349
doughnut 350
grape 350
hot dog 350
ice cream 350
juice 350
muffin 348
orange 349
pineapple 340
popcorn 260
pretzel 204
salad 350
strawberry 348
waffle 350
watermelon 350
```
To save space in the download, the images were resized so that their smallest side is 256 pixels. All EXIF information was removed.
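A minimal loading sketch (the `image` and `label` column names are assumptions based on the usual Hub layout for image-classification datasets, not taken from this card; check `dataset["train"].features` to confirm):
```python
from datasets import load_dataset

dataset = load_dataset("Matthijs/snacks")
example = dataset["train"][0]
print(dataset["train"].features)            # confirm the actual column names
print(example["label"], example["image"].size)  # assumed: class id, PIL image
```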
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
The **credits.csv** file contains the original URL, author information and license for each image.
| Matthijs/snacks | [
"task_categories:image-classification",
"license:cc-by-4.0",
"region:us"
] | 2022-04-12T07:30:24+00:00 | {"license": "cc-by-4.0", "task_categories": ["image-classification", "computer-vision"], "pretty_name": "Snacks"} | 2022-04-12T13:26:59+00:00 | [] | [] | TAGS
#task_categories-image-classification #license-cc-by-4.0 #region-us
|
# Dataset Card for Snacks
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book Machine Learning by Tutorials.
The images were taken from the Google Open Images dataset, release 2017_11.
## Dataset Structure
Number of images in the train/validation/test splits:
Total images in each category:
To save space in the download, the images were resized so that their smallest side is 256 pixels. All EXIF information was removed.
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a CC BY 2.0 license.
The annotations are licensed by Google Inc. under a CC BY 4.0 license.
The URL file contains the original URL, author information and license for each image.
| [
"# Dataset Card for Snacks",
"## Dataset Summary\n\nThis is a dataset of 20 different types of snack foods that accompanies the book Machine Learning by Tutorials.\n\nThe images were taken from the Google Open Images dataset, release 2017_11.",
"## Dataset Structure\n\nNumber of images in the train/validation/test splits:\n\n\n\nTotal images in each category:\n\n\n\nTo save space in the download, the images were resized so that their smallest side is 256 pixels. All EXIF information was removed.",
"### Data Splits\n\nTrain, Test, Validation",
"## Licensing Information\n\nJust like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license. \n\nThe images are listed as having a CC BY 2.0 license. \n\nThe annotations are licensed by Google Inc. under a CC BY 4.0 license. \n\nThe URL file contains the original URL, author information and license for each image."
] | [
"TAGS\n#task_categories-image-classification #license-cc-by-4.0 #region-us \n",
"# Dataset Card for Snacks",
"## Dataset Summary\n\nThis is a dataset of 20 different types of snack foods that accompanies the book Machine Learning by Tutorials.\n\nThe images were taken from the Google Open Images dataset, release 2017_11.",
"## Dataset Structure\n\nNumber of images in the train/validation/test splits:\n\n\n\nTotal images in each category:\n\n\n\nTo save space in the download, the images were resized so that their smallest side is 256 pixels. All EXIF information was removed.",
"### Data Splits\n\nTrain, Test, Validation",
"## Licensing Information\n\nJust like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license. \n\nThe images are listed as having a CC BY 2.0 license. \n\nThe annotations are licensed by Google Inc. under a CC BY 4.0 license. \n\nThe URL file contains the original URL, author information and license for each image."
] |
14aba009b5fcd97b1a9ee6f3e3b0da0e308cf7cb |
### Dataset Summary
This dataset is extracted from the FEVER dataset (https://fever.ai), pre-processed and ready for training and evaluation.
The training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim. | mwong/fever-evidence-related | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-12T07:39:59+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|fever"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "fever", "pretty_name": "fever"} | 2022-10-25T09:06:51+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
|
### Dataset Summary
This dataset is extracted from Fever dataset (URL), pre-processed and ready to train and evaluate.
The training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim. | [
"### Dataset Summary\nThis dataset is extracted from Fever dataset (URL), pre-processed and ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim."
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n",
"### Dataset Summary\nThis dataset is extracted from Fever dataset (URL), pre-processed and ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim."
] |
e53f048856ff4f594e959d75785d2c2d37b678ee |
# Dataset Card for GSM8K
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://openai.com/blog/grade-school-math/
- **Repository:** https://github.com/openai/grade-school-math
- **Paper:** https://arxiv.org/abs/2110.14168
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
- These problems take between 2 and 8 steps to solve.
- Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the final answer.
- A bright middle school student should be able to solve every problem: from the paper, "Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable."
- Solutions are provided in natural language, as opposed to pure math expressions. From the paper: "We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models’ internal monologues."
### Supported Tasks and Leaderboards
This dataset is generally used to test logic and math in language modelling.
It has been used for many benchmarks, including the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
For the `main` configuration, each instance contains a string for the grade-school level math question and a string for the corresponding answer with multiple steps of reasoning and calculator annotations (explained [here](https://github.com/openai/grade-school-math#calculation-annotations)).
```python
{
'question': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?',
'answer': 'Natalia sold 48/2 = <<48/2=24>>24 clips in May.\nNatalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72',
}
```
For the `socratic` configuration, each instance contains a string for a grade-school level math question, a string for the corresponding answer with multiple steps of reasoning, calculator annotations (explained [here](https://github.com/openai/grade-school-math#calculation-annotations)), and *Socratic sub-questions*.
```python
{
'question': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?',
'answer': 'How many clips did Natalia sell in May? ** Natalia sold 48/2 = <<48/2=24>>24 clips in May.\nHow many clips did Natalia sell altogether in April and May? ** Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72',
}
```
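The markers shown above make the answers straightforward to post-process. The snippet below is a minimal sketch (not an official parsing utility) of loading both configurations with the `datasets` library and separating the reasoning from the final numeric solution; the regular expression and string splits simply follow the formats illustrated in the examples.
```python
import re

from datasets import load_dataset

# Each configuration ships with "train" and "test" splits.
main_ds = load_dataset("gsm8k", "main")
socratic_ds = load_dataset("gsm8k", "socratic")

example = main_ds["train"][0]

# The final numeric solution follows the "#### " marker at the end of the answer.
final_answer = example["answer"].split("#### ")[-1].strip()

# Calculator annotations look like <<48/2=24>> and can be stripped for display.
readable_solution = re.sub(r"<<.*?>>", "", example["answer"])

# In the "socratic" configuration, each reasoning step is prefixed by a
# Socratic sub-question, separated from the step by " ** ".
socratic_answer = socratic_ds["train"][0]["answer"]
steps = [line.split(" ** ", 1) for line in socratic_answer.split("\n") if " ** " in line]
```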
### Data Fields
The data fields are the same among `main` and `socratic` configurations and their individual splits.
- question: The question string to a grade school math problem.
- answer: The full solution string to the `question`. It contains multiple steps of reasoning with calculator annotations and the final numeric solution.
### Data Splits
|  name  |train|test|
|--------|----:|---------:|
|main | 7473| 1319|
|socratic| 7473| 1319|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
From the paper, appendix A:
> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (upwork.com). We then worked with Surge AI (surgehq.ai), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Surge AI (surgehq.ai)
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The GSM8K dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```bibtex
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
```
### Contributions
Thanks to [@jon-tow](https://github.com/jon-tow) for adding this dataset. | gsm8k | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"math-word-problems",
"arxiv:2110.14168",
"region:us"
] | 2022-04-12T09:22:10+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]} | 2024-01-04T12:05:15+00:00 | [
"2110.14168"
] | [
"en"
] | TAGS
#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #math-word-problems #arxiv-2110.14168 #region-us
| Dataset Card for GSM8K
======================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact:
### Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
* These problems take between 2 and 8 steps to solve.
* Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the final answer.
* A bright middle school student should be able to solve every problem: from the paper, "Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable."
* Solutions are provided in natural language, as opposed to pure math expressions. From the paper: "We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models’ internal monologues""
### Supported Tasks and Leaderboards
This dataset is generally used to test logic and math in language modelling.
It has been used for many benchmarks, including the LLM Leaderboard.
### Languages
The text in the dataset is in English. The associated BCP-47 code is 'en'.
Dataset Structure
-----------------
### Data Instances
For the 'main' configuration, each instance contains a string for the grade-school level math question and a string for the corresponding answer with multiple steps of reasoning and calculator annotations (explained here).
For the 'socratic' configuration, each instance contains a string for a grade-school level math question, a string for the corresponding answer with multiple steps of reasoning, calculator annotations (explained here), and *Socratic sub-questions*.
### Data Fields
The data fields are the same among 'main' and 'socratic' configurations and their individual splits.
* question: The question string to a grade school math problem.
* answer: The full solution string to the 'question'. It contains multiple steps of reasoning with calculator annotations and the final numeric solution.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
From the paper, appendix A:
>
> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (URL). We then worked with Surge AI (URL), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.
>
>
>
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
Surge AI (URL)
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
The GSM8K dataset is licensed under the MIT License.
### Contributions
Thanks to @jon-tow for adding this dataset.
| [
"### Dataset Summary\n\n\nGSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.\n\n\n* These problems take between 2 and 8 steps to solve.\n* Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the final answer.\n* A bright middle school student should be able to solve every problem: from the paper, \"Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable.\"\n* Solutions are provided in natural language, as opposed to pure math expressions. From the paper: \"We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models’ internal monologues\"\"",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is generally used to test logic and math in language modelling.\nIt has been used for many benchmarks, including the LLM Leaderboard.",
"### Languages\n\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor the 'main' configuration, each instance contains a string for the grade-school level math question and a string for the corresponding answer with multiple steps of reasoning and calculator annotations (explained here).\n\n\nFor the 'socratic' configuration, each instance contains a string for a grade-school level math question, a string for the corresponding answer with multiple steps of reasoning, calculator annotations (explained here), and *Socratic sub-questions*.",
"### Data Fields\n\n\nThe data fields are the same among 'main' and 'socratic' configurations and their individual splits.\n\n\n* question: The question string to a grade school math problem.\n* answer: The full solution string to the 'question'. It contains multiple steps of reasoning with calculator annotations and the final numeric solution.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nFrom the paper, appendix A:\n\n\n\n> \n> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (URL). We then worked with Surge AI (URL), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.\n> \n> \n>",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nSurge AI (URL)",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThe GSM8K dataset is licensed under the MIT License.",
"### Contributions\n\n\nThanks to @jon-tow for adding this dataset."
] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #math-word-problems #arxiv-2110.14168 #region-us \n",
"### Dataset Summary\n\n\nGSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.\n\n\n* These problems take between 2 and 8 steps to solve.\n* Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the final answer.\n* A bright middle school student should be able to solve every problem: from the paper, \"Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable.\"\n* Solutions are provided in natural language, as opposed to pure math expressions. From the paper: \"We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models’ internal monologues\"\"",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is generally used to test logic and math in language modelling.\nIt has been used for many benchmarks, including the LLM Leaderboard.",
"### Languages\n\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor the 'main' configuration, each instance contains a string for the grade-school level math question and a string for the corresponding answer with multiple steps of reasoning and calculator annotations (explained here).\n\n\nFor the 'socratic' configuration, each instance contains a string for a grade-school level math question, a string for the corresponding answer with multiple steps of reasoning, calculator annotations (explained here), and *Socratic sub-questions*.",
"### Data Fields\n\n\nThe data fields are the same among 'main' and 'socratic' configurations and their individual splits.\n\n\n* question: The question string to a grade school math problem.\n* answer: The full solution string to the 'question'. It contains multiple steps of reasoning with calculator annotations and the final numeric solution.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nFrom the paper, appendix A:\n\n\n\n> \n> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (URL). We then worked with Surge AI (URL), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.\n> \n> \n>",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nSurge AI (URL)",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThe GSM8K dataset is licensed under the MIT License.",
"### Contributions\n\n\nThanks to @jon-tow for adding this dataset."
] |
d7ec87c8656a85d795e41f727f9ef2286310eee9 |
# Dataset Card for SBU Captioned Photo Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.rice.edu/~vo9/sbucaptions/
- **Repository:**
- **Paper:** [Im2Text: Describing Images Using 1 Million Captioned Photographs](https://papers.nips.cc/paper/2011/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html)
- **Leaderboard:**
- **Point of Contact:** [Vicente Ordóñez Román](mailto:[email protected])
### Dataset Summary
SBU Captioned Photo Dataset is a collection of associated captions and images from Flickr.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
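# Download a single image; return a PIL image on success or None if every attempt fails.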
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
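# Fetch the images for a whole batch of examples in parallel threads and add them as an 'image' column.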
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("sbu_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-to-text`: This dataset can be used to train a model for Image Captioning where the goal is to predict a caption given the image.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
Each instance in SBU Captioned Photo Dataset represents a single image with a caption and a user_id:
```
{
  'image_url': 'http://static.flickr.com/2723/4385058960_b0f291553e.jpg',
'user_id': '47889917@N08',
'caption': 'A wooden chair in the living room'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `user_id`: Author of caption.
### Data Splits
All the data is contained in the training split. The training set has 1M instances.
## Dataset Creation
### Curation Rationale
From the paper:
> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually
relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.
### Source Data
The source images come from Flickr.
#### Initial Data Collection and Normalization
One key contribution of our paper is a novel web-scale database of photographs with associated
descriptive text. To enable effective captioning of novel images, this database must be good in two
ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The
captions associated with the data base photographs must be visually relevant so that transferring
captions between pictures is useful. To achieve the first requirement we query Flickr using a huge
number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very
large, but noisy initial set of photographs with associated text.
#### Who are the source language producers?
The Flickr users.
### Annotations
#### Annotation process
Text descriptions associated with the images are inherited as annotations/captions.
#### Who are the annotators?
The Flickr users.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.
### Licensing Information
Not specified.
### Citation Information
```bibtex
@inproceedings{NIPS2011_5dd9db5e,
author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
booktitle = {Advances in Neural Information Processing Systems},
editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
pages = {},
publisher = {Curran Associates, Inc.},
title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
volume = {24},
year = {2011}
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset | sbu_captions | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-04-12T09:41:52+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "paperswithcode_id": "sbu-captions-dataset", "pretty_name": "SBU Captioned Photo Dataset", "dataset_info": {"features": [{"name": "image_url", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 143795586, "num_examples": 1000000}], "download_size": 49787719, "dataset_size": 143795586}} | 2024-01-18T11:19:05+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-unknown #region-us
|
# Dataset Card for SBU Captioned Photo Dataset
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Dataset Preprocessing
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper: Im2Text: Describing Images Using 1 Million Captioned Photographs
- Leaderboard:
- Point of Contact: Vicente Ordóñez Román
### Dataset Summary
SBU Captioned Photo Dataset is a collection of associated captions and images from Flickr.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
### Supported Tasks and Leaderboards
- 'image-to-text': This dataset can be used to train a model for Image Captioning where the goal is to predict a caption given the image.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
Each instance in SBU Captioned Photo Dataset represents a single image with a caption and a user_id:
### Data Fields
- 'image_url': Static URL for downloading the image associated with the post.
- 'caption': Textual description of the image.
- 'user_id': Author of caption.
### Data Splits
All the data is contained in training split. The training set has 1M instances.
## Dataset Creation
### Curation Rationale
From the paper:
> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually
relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.
### Source Data
The source images come from Flickr.
#### Initial Data Collection and Normalization
One key contribution of our paper is a novel web-scale database of photographs with associated
descriptive text. To enable effective captioning of novel images, this database must be good in two
ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The
captions associated with the data base photographs must be visually relevant so that transferring
captions between pictures is useful. To achieve the first requirement we query Flickr using a huge
number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very
large, but noisy initial set of photographs with associated text.
#### Who are the source language producers?
The Flickr users.
### Annotations
#### Annotation process
Text descriptions associated with the images are inherited as annotations/captions.
#### Who are the annotators?
The Flickr users.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.
### Licensing Information
Not specified.
### Contributions
Thanks to @thomasw21 for adding this dataset | [
"# Dataset Card for SBU Captioned Photo Dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: Im2Text: Describing Images Using 1 Million Captioned Photographs\n- Leaderboard:\n- Point of Contact: Vicente Ordóñez Román",
"### Dataset Summary\n\nSBU Captioned Photo Dataset is a collection of associated captions and images from Flickr.",
"### Dataset Preprocessing\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:",
"### Supported Tasks and Leaderboards\n\n- 'image-to-text': This dataset can be used to train a model for Image Captioning where the goal is to predict a caption given the image.",
"### Languages\n\nAll captions are in English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance in SBU Captioned Photo Dataset represents a single image with a caption and a user_id:",
"### Data Fields\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.\n- 'user_id': Author of caption.",
"### Data Splits\n\nAll the data is contained in training split. The training set has 1M instances.",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the paper:\n> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually\nrelevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.",
"### Source Data\n\nThe source images come from Flickr.",
"#### Initial Data Collection and Normalization\n\nOne key contribution of our paper is a novel web-scale database of photographs with associated\ndescriptive text. To enable effective captioning of novel images, this database must be good in two\nways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The\ncaptions associated with the data base photographs must be visually relevant so that transferring\ncaptions between pictures is useful. To achieve the first requirement we query Flickr using a huge\nnumber of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very\nlarge, but noisy initial set of photographs with associated text.",
"#### Who are the source language producers?\n\nThe Flickr users.",
"### Annotations",
"#### Annotation process\n\nText descriptions associated with the images are inherited as annotations/captions.",
"#### Who are the annotators?\n\nThe Flickr users.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nVicente Ordonez, Girish Kulkarni and Tamara L. Berg.",
"### Licensing Information\n\nNot specified.",
"### Contributions\n\nThanks to @thomasw21 for adding this dataset"
] | [
"TAGS\n#task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-unknown #region-us \n",
"# Dataset Card for SBU Captioned Photo Dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: Im2Text: Describing Images Using 1 Million Captioned Photographs\n- Leaderboard:\n- Point of Contact: Vicente Ordóñez Román",
"### Dataset Summary\n\nSBU Captioned Photo Dataset is a collection of associated captions and images from Flickr.",
"### Dataset Preprocessing\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:",
"### Supported Tasks and Leaderboards\n\n- 'image-to-text': This dataset can be used to train a model for Image Captioning where the goal is to predict a caption given the image.",
"### Languages\n\nAll captions are in English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance in SBU Captioned Photo Dataset represents a single image with a caption and a user_id:",
"### Data Fields\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.\n- 'user_id': Author of caption.",
"### Data Splits\n\nAll the data is contained in training split. The training set has 1M instances.",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the paper:\n> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually\nrelevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.",
"### Source Data\n\nThe source images come from Flickr.",
"#### Initial Data Collection and Normalization\n\nOne key contribution of our paper is a novel web-scale database of photographs with associated\ndescriptive text. To enable effective captioning of novel images, this database must be good in two\nways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The\ncaptions associated with the data base photographs must be visually relevant so that transferring\ncaptions between pictures is useful. To achieve the first requirement we query Flickr using a huge\nnumber of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very\nlarge, but noisy initial set of photographs with associated text.",
"#### Who are the source language producers?\n\nThe Flickr users.",
"### Annotations",
"#### Annotation process\n\nText descriptions associated with the images are inherited as annotations/captions.",
"#### Who are the annotators?\n\nThe Flickr users.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nVicente Ordonez, Girish Kulkarni and Tamara L. Berg.",
"### Licensing Information\n\nNot specified.",
"### Contributions\n\nThanks to @thomasw21 for adding this dataset"
] |
4e2d0dd3b956ed9e3c623e087a9e800afee79572 |
# Dataset Description
Out of **20,577** human proteins (from [UniProt human proteome](https://www.uniprot.org/proteomes/UP000005640)), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of **12,703** proteins. The uShuffle algorithm ([Python package](https://github.com/guma44/ushuffle)) was then used to shuffle these protein sequences while maintaining their doublet distribution. The very few sequences for which uShuffle failed to create a shuffled version were eliminated.
Afterwards, the h-CD-HIT algorithm ([web server](http://weizhong-lab.ucsd.edu/cdhit-web-server/cgi-bin/index.cgi)) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of **11,658** sequences.
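For reference, a shuffle of this kind can be reproduced with the uShuffle Python bindings linked above. The snippet below is only an illustrative sketch, not the script used to build this dataset: the `shuffle(sequence, let_size)` call operating on a bytes object is assumed from that package, and the example protein string is made up.
```python
from ushuffle import shuffle  # assumed API of https://github.com/guma44/ushuffle

# Hypothetical amino-acid sequence; the real inputs come from the UniProt human proteome.
protein = b"MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

# let_size=2 keeps the doublet (2-mer) counts of the original sequence unchanged.
shuffled = shuffle(protein, 2)
print(shuffled.decode())
```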
# Citation
If you use this dataset, please cite our paper:
```
@article {
author = {Geffen, Yaron and Ofran, Yanay and Unger, Ron},
title = {DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts},
year = {2022},
doi = {10.1093/bioinformatics/btac474},
URL = {https://doi.org/10.1093/bioinformatics/btac474},
journal = {Bioinformatics}
}
``` | yarongef/human_proteome_doublets | [
"license:mit",
"region:us"
] | 2022-04-12T09:42:22+00:00 | {"license": "mit"} | 2022-09-21T07:43:43+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Dataset Description
Out of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their doublet distribution. The very few sequences for which uShuffle failed to create a shuffled version were eliminated.
Afterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 11,658 sequences.
If you use this dataset, please cite our paper:
| [
"# Dataset Description\nOut of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their doublet distribution. The very few sequences for which uShuffle failed to create a shuffled version were eliminated.\nAfterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 11,658 sequences.\n\nIf you use this dataset, please cite our paper:"
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Description\nOut of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their doublet distribution. The very few sequences for which uShuffle failed to create a shuffled version were eliminated.\nAfterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 11,658 sequences.\n\nIf you use this dataset, please cite our paper:"
] |
7a048b602b00c84360b440c8f13b198b67c14ae4 |
# Dataset Description
Out of **20,577** human proteins (from [UniProt human proteome](https://www.uniprot.org/proteomes/UP000005640)), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of **12,703** proteins. The uShuffle algorithm ([Python package](https://github.com/guma44/ushuffle)) was then used to shuffle these protein sequences while maintaining their triplet distribution. The sequences for which uShuffle failed to create a shuffled version were eliminated.
Afterwards, the h-CD-HIT algorithm ([web server](http://weizhong-lab.ucsd.edu/cdhit-web-server/cgi-bin/index.cgi)) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of **3,688** sequences.
# Citation
If you use this dataset, please cite our paper:
```
@article {
author = {Geffen, Yaron and Ofran, Yanay and Unger, Ron},
title = {DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts},
year = {2022},
doi = {10.1093/bioinformatics/btac474},
URL = {https://doi.org/10.1093/bioinformatics/btac474},
journal = {Bioinformatics}
}
``` | yarongef/human_proteome_triplets | [
"license:mit",
"region:us"
] | 2022-04-12T09:44:30+00:00 | {"license": "mit"} | 2022-09-21T07:44:27+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Dataset Description
Out of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their triplet distribution. The sequences for which uShuffle failed to create a shuffled version were eliminated.
Afterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 3,688 sequences.
If you use this dataset, please cite our paper:
| [
"# Dataset Description\nOut of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their triplet distribution. The sequences for which uShuffle failed to create a shuffled version were eliminated.\nAfterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 3,688 sequences.\n\nIf you use this dataset, please cite our paper:"
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Description\nOut of 20,577 human proteins (from UniProt human proteome), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of 12,703 proteins. The uShuffle algorithm (python pacakge) was then used to shuffle these protein sequences while maintaining their triplet distribution. The sequences for which uShuffle failed to create a shuffled version were eliminated.\nAfterwards, h-CD-HIT algorithm (web server) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of 3,688 sequences.\n\nIf you use this dataset, please cite our paper:"
] |
4a4b251e2258a5d44e0b258c1ea7b026d4e7147e |
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and evidence, predict whether the evidence is related to the claim. | mwong/climate-evidence-related | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-12T09:58:49+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_fever"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "climate-fever", "pretty_name": "climate-fever"} | 2022-10-25T09:06:54+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
|
### Dataset Summary
This dataset is extracted from Climate Fever dataset (URL pre-processed and ready to train and evaluate.
The training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim. | [
"### Dataset Summary\nThis dataset is extracted from Climate Fever dataset (URL pre-processed and ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim."
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n",
"### Dataset Summary\nThis dataset is extracted from Climate Fever dataset (URL pre-processed and ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim."
] |
15e4d1933a321088880215257aa25b621a091335 | This dataset is a subset of the original eli5 dataset available on Hugging Face. | Pavithree/eli5 | [
"region:us"
] | 2022-04-12T11:31:47+00:00 | {} | 2022-04-23T07:38:44+00:00 | [] | [] | TAGS
#region-us
| This dataset is the subset of original eli5 dataset available on hugging face | [] | [
"TAGS\n#region-us \n"
] |
ca047290468c7f565f248c16139d2a096230112b |
# Dataset Card for Snacks (Detection)
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Included in the **data** folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations.
The columns in the CSV files are:
- `image_id`: the filename of the image without the .jpg extension
- `x_min, x_max, y_min, y_max`: normalized bounding box coordinates, i.e. in the range [0, 1]
- `class_name`: the class that belongs to the bounding box
- `folder`: the class that belongs to the image as a whole, which is also the name of the folder that contains the image
The class names are:
```nohighlight
apple
banana
cake
candy
carrot
cookie
doughnut
grape
hot dog
ice cream
juice
muffin
orange
pineapple
popcorn
pretzel
salad
strawberry
waffle
watermelon
```
**Note:** The image files are not part of this repo but [can be found here](https://huggingface.co/datasets/Matthijs/snacks).
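As a rough sketch, the annotations can be combined with the images from that repo along the lines below. The CSV filename, the presence of a header row, and the image directory layout are assumptions here, so adjust them to your local copy.
```python
import pandas as pd
from PIL import Image

# Hypothetical filename; the data folder contains three CSVs (train/validation/test),
# assumed to have a header row with the columns listed above.
annotations = pd.read_csv("data/annotations-train.csv")

row = annotations.iloc[0]

# image_id is the filename without the .jpg extension, and folder is the class
# directory the image lives in (the path prefix is an assumption about your local layout).
image = Image.open(f"train/{row['folder']}/{row['image_id']}.jpg")
width, height = image.size

# Coordinates are normalized to [0, 1]; scale by the image size to get pixel values.
box = (
    row["x_min"] * width,
    row["y_min"] * height,
    row["x_max"] * width,
    row["y_max"] * height,
)
print(row["class_name"], box)
```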
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
| Matthijs/snacks-detection | [
"task_categories:object-detection",
"license:cc-by-4.0",
"region:us"
] | 2022-04-12T13:17:41+00:00 | {"license": "cc-by-4.0", "task_categories": ["object-detection", "computer-vision"], "pretty_name": "Snacks (Detection)"} | 2022-04-12T13:26:04+00:00 | [] | [] | TAGS
#task_categories-object-detection #license-cc-by-4.0 #region-us
|
# Dataset Card for Snacks (Detection)
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book Machine Learning by Tutorials.
The images were taken from the Google Open Images dataset, release 2017_11.
## Dataset Structure
Included in the data folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations.
The columns in the CSV files are:
- 'image_id': the filename of the image without the .jpg extension
- 'x_min, x_max, y_min, y_max': normalized bounding box coordinates, i.e. in the range [0, 1]
- 'class_name': the class that belongs to the bounding box
- 'folder': the class that belongs to the image as a whole, which is also the name of the folder that contains the image
The class names are:
Note: The image files are not part of this repo but can be found here.
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a CC BY 2.0 license.
The annotations are licensed by Google Inc. under a CC BY 4.0 license.
| [
"# Dataset Card for Snacks (Detection)",
"## Dataset Summary\n\nThis is a dataset of 20 different types of snack foods that accompanies the book Machine Learning by Tutorials.\n\nThe images were taken from the Google Open Images dataset, release 2017_11.",
"## Dataset Structure\n\nIncluded in the data folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations. \n\nThe columns in the CSV files are: \n\n- 'image_id': the filename of the image without the .jpg extension\n- 'x_min, x_max, y_min, y_max': normalized bounding box coordinates, i.e. in the range [0, 1]\n- 'class_name': the class that belongs to the bounding box\n- 'folder': the class that belongs to the image as a whole, which is also the name of the folder that contains the image\n\nThe class names are:\n\n\n\nNote: The image files are not part of this repo but can be found here.",
"### Data Splits\n\nTrain, Test, Validation",
"## Licensing Information\n\nJust like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license. \n\nThe images are listed as having a CC BY 2.0 license. \n\nThe annotations are licensed by Google Inc. under a CC BY 4.0 license."
] | [
"TAGS\n#task_categories-object-detection #license-cc-by-4.0 #region-us \n",
"# Dataset Card for Snacks (Detection)",
"## Dataset Summary\n\nThis is a dataset of 20 different types of snack foods that accompanies the book Machine Learning by Tutorials.\n\nThe images were taken from the Google Open Images dataset, release 2017_11.",
"## Dataset Structure\n\nIncluded in the data folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations. \n\nThe columns in the CSV files are: \n\n- 'image_id': the filename of the image without the .jpg extension\n- 'x_min, x_max, y_min, y_max': normalized bounding box coordinates, i.e. in the range [0, 1]\n- 'class_name': the class that belongs to the bounding box\n- 'folder': the class that belongs to the image as a whole, which is also the name of the folder that contains the image\n\nThe class names are:\n\n\n\nNote: The image files are not part of this repo but can be found here.",
"### Data Splits\n\nTrain, Test, Validation",
"## Licensing Information\n\nJust like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license. \n\nThe images are listed as having a CC BY 2.0 license. \n\nThe annotations are licensed by Google Inc. under a CC BY 4.0 license."
] |