| column | type | min length | max length |
| --- | --- | --- | --- |
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
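A minimal sketch of reading one record under this schema, assuming the rows are materialized as a parquet file (the file name is hypothetical):

```python
# Hypothetical file name; columns follow the schema table above.
import pandas as pd

df = pd.read_parquet("dataset_cards.parquet")
row = df.iloc[0]
print(row["id"], row["created_at"], row["tags"])
print(row["text"][:300])  # start of the dataset-card body
```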
d3d4c6fc780dc8d0de62aeef28a38ba2fbed3606
| images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | available |

```
dataset-size: 107MB
resolution: 1024x1024
license: ...
sample-size:
./pix2pixHD_person_synthesis
├── test_img [10 entries]
├── test_inst [10 entries]
├── test_label [10 entries]
├── train_img [160 entries]
├── train_inst [160 entries]
└── train_label [160 entries]
```
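A minimal sketch of iterating the paired data under the layout above; the root path and the assumption that files share names across the `_img`, `_label` and `_inst` folders are both hypothetical:

```python
from pathlib import Path
from PIL import Image  # pip install pillow

root = Path("./pix2pixHD_person_synthesis")  # hypothetical local checkout

def iter_triples(split: str):
    """Yield (image, semantic map, instance map) for each sample in a split."""
    for img_path in sorted((root / f"{split}_img").iterdir()):
        label = Image.open(root / f"{split}_label" / img_path.name)
        inst = Image.open(root / f"{split}_inst" / img_path.name)
        yield Image.open(img_path), label, inst

img, label, inst = next(iter_triples("train"))
assert img.size == label.size == inst.size  # all 1024x1024 per the card
```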
arakesh/PennFudanPedestrian-1024x512
[ "region:us" ]
2022-04-12T14:58:09+00:00
{}
2022-04-12T15:14:33+00:00
[]
[]
b46b232f33c35877e75081f948a89357ea4f1016
Data source: http://deepglobe.org/

| images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | n/a |

```
dataset-size: 2.0GB
resolution: 2448x2448
license: ...
sample-size:
./pix2pixHD-deepglobe-synthesis
├── test_img [30 entries]
├── test_label [30 entries]
├── train_img [773 entries]
└── train_label [773 entries]
```
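A minimal sketch for tallying semantic-class pixel frequencies over the training labels (e.g., to set class weights); the path is hypothetical, and it assumes single-channel index label maps of the kind pix2pixHD expects:

```python
from collections import Counter
from pathlib import Path
import numpy as np
from PIL import Image

counts = Counter()
for p in sorted(Path("./pix2pixHD-deepglobe-synthesis/train_label").glob("*")):
    arr = np.array(Image.open(p))  # assumed: single-channel class-index map
    vals, n = np.unique(arr, return_counts=True)
    counts.update(dict(zip(vals.tolist(), n.tolist())))

print(counts.most_common())  # class index -> total pixel count
```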
arakesh/deepglobe-2448x2448
[ "region:us" ]
2022-04-12T15:20:33+00:00
{}
2022-04-12T16:20:26+00:00
[]
[]
bdf451283a48e53f34fea37a8ad1c475175308cf
Data source: https://uavid.nl/

| images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | n/a |

```
dataset-size: 6.1GB
resolution: mixed (3840x2160, 4096x2160) - resolutions vary because different drones carry different cameras
license: ...
sample-size:
+ train: 200
+ test: 70
```
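Because frame sizes vary, a normalization pass is usually needed before batching. A minimal sketch; the target size is an assumption, and label maps should use nearest-neighbor resampling so no new class values are invented:

```python
from PIL import Image

def to_common_size(img: Image.Image, size=(2048, 1152), is_label=False) -> Image.Image:
    """Center-crop to the target aspect ratio, then resize."""
    tw, th = size
    w, h = img.size
    ratio = tw / th
    if w / h > ratio:              # too wide: crop width
        new_w = round(h * ratio)
        left = (w - new_w) // 2
        img = img.crop((left, 0, left + new_w, h))
    else:                          # too tall (or exact): crop height
        new_h = round(w / ratio)
        top = (h - new_h) // 2
        img = img.crop((0, top, w, top + new_h))
    resample = Image.NEAREST if is_label else Image.BILINEAR
    return img.resize(size, resample)

# 3840x2160 passes through the crop untouched; 4096x2160 loses ~128px per side.
print(to_common_size(Image.new("RGB", (4096, 2160))).size)  # (2048, 1152)
```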
arakesh/uavid-15-hq-mixedres
[ "region:us" ]
2022-04-12T16:04:13+00:00
{}
2022-04-12T16:19:47+00:00
[]
[]
e5981976381334c87f5917b4743d749726a9e21b
## Problem and Opportunity

In the United States, voting is largely a private matter. A registered voter is given a randomized ballot form or machine to prevent linkage between their voting choices and their identity. This disconnect supports confidence in the election process, but it creates obstacles for an election's analysis. A common solution is to field exit polls, interviewing voters immediately after they leave their polling location. This method is rife with bias, however, and limited in the direct demographic data it can collect.

For the 2020 general election, though, most states published their election results for each voting location. These publications were additionally supported by the geographical areas assigned to each location, the voting precincts. As a result, geographic processing can now be applied to project precinct election results onto Census block groups. While precincts have few demographic traits directly, their geographies have characteristics that make them projectable onto U.S. Census geographies. Both state voting precincts and U.S. Census block groups:

* are exclusive, and do not overlap
* are adjacent, fully covering their corresponding state and, potentially, county
* have roughly the same size in area, population and voter presence

Analytically, a projection of local demographics does not allow conclusions about voters themselves. However, the dataset does allow statements about the geographies that yield voting behavior. One could say, for example, that an area dominated by a particular voting pattern has mean traits of age, race, income or household structure.

The dataset that results from this processing provides voting results allocated to Census block groups. The block group identifier can be joined to Census Decennial and American Community Survey demographic estimates, as in the sketch below.
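A minimal join sketch; the file and column names are assumptions rather than the published schema, and only the block-group identifier as join key follows from the card:

```python
import pandas as pd

# Hypothetical files: block-group-allocated votes and ACS block-group estimates.
votes = pd.read_csv("blockgroup_voting.csv", dtype={"GEOID": str})
acs = pd.read_csv("acs_blockgroups.csv", dtype={"GEOID": str})

# GEOID is the 12-digit block-group FIPS code (state + county + tract + block group).
merged = votes.merge(acs, on="GEOID", how="inner")

# e.g., mean household income of block groups, grouped by the leading party
print(merged.groupby("leading_party")["median_household_income"].mean())
```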
openenvironments/blockgroupvoting
[ "license:mit", "region:us" ]
2022-04-12T20:15:18+00:00
{"license": "mit"}
2022-04-12T20:19:08+00:00
[]
[]
58bafe5544a7b2eb3303c7d2ff3ea282a6446ba4
# Dataset Card for HatemojiCheck

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Content Warning
This dataset contains examples of hateful language.

## Dataset Description and Details
- **Repository:** https://github.com/HannahKirk/Hatemoji
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** [email protected]

### Dataset Summary
HatemojiCheck is a test suite of 3,930 test cases covering seven functionalities of emoji-based hate and six identities. HatemojiCheck contains the text for each test case and its gold-standard label from the majority agreement of three annotators. We provide labels by target of hate. HatemojiCheck can be used to evaluate the robustness of hate speech classifiers to constructions of emoji-based hate.

### Supported Tasks
Hate Speech Detection

### Languages
English

## Dataset Structure

### Data Instances
3,930 test cases

### Data Fields

case_id: The unique ID of the test case (assigned to each of the 3,930 cases generated).

templ_id: The unique ID of the template (original=.0, identity perturbation=.1, polarity perturbation=.2, emoji perturbation=.3) from which the test case was generated.

test_grp_id: The ID of the set of templates (original, identity perturbation, polarity perturbation, no-emoji perturbation) from which the test case was generated.

text: The text of the test case.

target: Where applicable, the protected group targeted or referenced by the test case. We cover six protected groups in the test suite: women, trans people, gay people, black people, disabled people and Muslims.

functionality: The shorthand for the functionality tested by the test case.

set: Whether the test case is an original statement, an identity perturbation, a polarity perturbation or a no-emoji perturbation.

label_gold: The gold-standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold-standard label.

unrealistic_flags: The number of annotators (out of 3) who flagged the test case as unrealistic.

included_in_test_suite: Indicator for whether the test case is included in the final HatemojiCheck test suite. All 3,930 test cases are included.

### Data Splits
All of HatemojiCheck is designated for testing models, so only a test split is provided.

## Dataset Creation

### Curation Rationale
The purpose of HatemojiCheck is to evaluate the performance of black-box models against varied constructions of emoji-based hate. To construct HatemojiCheck, we hand-crafted 3,930 short-form English-language texts using a template-based method for group identities and slurs. Each test case exemplifies one functionality and is associated with a binary gold-standard label, _hateful_ versus _not hateful_. All 3,930 cases were labeled by a trained team of three annotators, who could also flag examples that were unrealistic. Any test cases with multiple disagreements or flags were replaced with alternative templates and re-issued for annotation to improve the quality of examples in the final set of test cases.

### Source Data

#### Initial Data Collection and Normalization
Based on the literature, we define a list of potentially hateful emoji and words, and use Twitter's Streaming API to search for the Cartesian products of emoji-emoji and emoji-word pairs over a two-week period. To identify different forms of emoji-based hate, we apply a grounded-theory approach on a sample of 3,295 tweets, splitting out distinctive categories, and recursively selecting sub-categories until all key parts of the data are captured and the framework is 'saturated'.

#### Who are the source language producers?
All test cases were hand-crafted by the lead author, who is a native English-speaking researcher at a UK university with extensive subject-matter expertise in online harms. The test cases are in English. This choice was motivated by the researchers' and annotators' expertise, and to maximize HatemojiCheck's applicability to previous hate speech detection studies, which are predominantly conducted on English-language data. We discuss the limitations of restricting HatemojiCheck to one language and suggest that future work should prioritize expanding the test suite to other languages.

### Annotations

#### Annotation process
To validate the gold-standard labels assigned to each test case, we recruited three annotators with prior experience on hate speech projects. Annotators were given extensive guidelines, test tasks and training sessions, which included examining real-world examples of emoji-based hate from Twitter. We followed guidance for protecting annotator well-being. There were two iterative rounds of annotation. In the first round, each annotator labeled all 3,930 test cases as hateful or non-hateful, and had the option to flag unrealistic entries. Test cases with any disagreement or unrealistic flags were reviewed by the study authors (n=289). One-on-one interviews were conducted with annotators to distinguish dataset issues from annotator error. Of the 289 test cases, 119 were identified as ambiguous or unrealistic, replaced with alternatives and re-issued to annotators for labeling. No further issues were raised. We measured inter-annotator agreement using Randolph's kappa, obtaining a value of 0.85 for the final set of test cases, which indicates "almost perfect agreement".

#### Who are the annotators?
We recruited a team of three annotators who worked for two weeks in May 2021 and were paid £16/hour. All annotators were female and between 30 and 39 years old. One had an undergraduate degree, one a taught graduate degree and one a post-graduate research degree. There were three nationalities (Argentinian, British and Iraqi), two ethnicities (White and Arab), and three religious affiliations (Catholic, Muslim and None). One annotator was a native English speaker and the others were non-native but fluent. All annotators used emoji and social media more than once per day. All annotators had seen others targeted by abuse online, and one had been targeted personally.

### Personal and Sensitive Information
HatemojiCheck contains synthetic statements, so it has no personal information. It does, however, contain harmful examples of emoji-based hate which could be disturbing or damaging to view.

## Considerations for Using the Data

### Social Impact of Dataset
HatemojiCheck contains challenging emoji examples on which commercial solutions and state-of-the-art transformer models have been shown to fail. Malicious actors could take inspiration for bypassing current detection systems on internet platforms, or in principle train a generative hate speech model. However, the suite also helps to evaluate a model's weaknesses to emoji-based hate, so it can be used to mitigate the harm to victims before a model is deployed.

### Discussion of Biases
HatemojiCheck only contains test cases against six identities: women, trans people, gay people, disabled people, Black people and Muslims. It is thus biased towards evaluating hate directed at these targets. Additionally, HatemojiCheck was motivated by an empirical study of English-language tweets. The usage of emoji varies significantly across culture, country and demographic, so there may be biases towards Western, English-language use of emoji.

### Other Known Limitations
While inspired by real-world instances of emoji-based hate, HatemojiCheck contains synthetic, hand-crafted test cases. These test cases are designed to be a "minimum performance standard" against which to hold models accountable. However, because the test cases are designed to have one "clear, gold-standard label", they may be easier to predict than more nuanced, complex, real-world instances of emoji-based hate.

## Additional Information

### Dataset Curators
The dataset was created by the lead author (Hannah Rose Kirk), then validated by the other authors and three annotators.

### Licensing Information
Creative Commons Attribution 4.0 International Public License. For full detail see: https://github.com/HannahKirk/Hatemoji/blob/main/LICENSE

### Citation Information
If you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.

```
@article{kirk2021hatemoji,
  title={Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate},
  author={Kirk, Hannah Rose and Vidgen, Bertram and R{\"o}ttger, Paul and Thrush, Tristan and Hale, Scott A},
  journal={arXiv preprint arXiv:2108.05921},
  year={2021}
}
```

### Contributions
Thanks to [@HannahKirk](https://github.com/HannahKirk) for adding this dataset.
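A minimal evaluation sketch using the fields named in the card; the Hub split name and the stand-in classifier are assumptions, and the mapping from model output labels to hateful/non-hateful should be checked against that model's card:

```python
from collections import defaultdict
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("HannahRoseKirk/HatemojiCheck", split="test")  # only a test split, per the card
clf = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-hate")  # stand-in model

hits, totals = defaultdict(int), defaultdict(int)
for case, pred in zip(ds, clf(list(ds["text"]), truncation=True)):
    # assumption: the model emits HATE / NOT-HATE style labels
    pred_label = "non-hateful" if "not" in pred["label"].lower() else "hateful"
    totals[case["functionality"]] += 1
    hits[case["functionality"]] += int(pred_label == case["label_gold"])

for func in sorted(totals):
    print(f"{func}: {hits[func] / totals[func]:.1%} of {totals[func]} cases")
```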
HannahRoseKirk/HatemojiCheck
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:expert", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "license:cc-by-4.0", "arxiv:2108.05921", "region:us" ]
2022-04-13T07:35:38+00:00
{"annotations_creators": ["expert"], "language_creators": ["expert-generated"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "HatemojiCheck", "languages": ["en"], "extra_gated_prompt": "We have deactivated the automatic preview for this dataset because it contains hate speech. If you want to see the preview, you can continue."}
2022-05-15T07:56:10+00:00
[ "2108.05921" ]
[]
17509b6678c3206ce7bfa43107fdadf53f3d06f7
# Dataset Card for HatemojiBuild

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Content Warning
This dataset contains examples of hateful language.

## Dataset Description and Details
- **Repository:** https://github.com/HannahKirk/Hatemoji
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** [email protected]

### Dataset Summary
HatemojiBuild can be used to train, develop and test models on emoji-based hate with challenging adversarial examples and perturbations. HatemojiBuild is a dataset of 5,912 adversarially-generated examples created on Dynabench using a human-and-model-in-the-loop approach. We collect data in three consecutive rounds. Our work follows on from Vidgen et al. (2021), _Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection_ (http://arxiv.org/abs/2012.15761), who collect four rounds of textual adversarial examples. The R1-R4 data is available at https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset. The entries in HatemojiBuild are labeled by round (R5-R7). The text of each entry is given with its gold-standard label from the majority agreement of three annotators. Each original entry is paired with one perturbation, so each row of the dataset matches these two cases. We also provide granular labels of type and target for hateful entries.

### Supported Tasks
Hate Speech Detection

### Languages
English

## Dataset Structure

### Data Instances
5,912 adversarially-generated instances

### Data Fields

entry_id: The unique ID of the entry (assigned to each of the 5,912 cases generated).

text: The text of the entry.

type: The type of hate assigned to hateful entries.

target: The target of hate assigned to hateful entries.

round.base: The round in which the entry was generated.

round.set: The round and whether the entry came from an original statement (a) or a perturbation (b).

set: Whether the entry is an original or a perturbation.

split: The randomly-assigned train/dev/test split used in our work (80:10:10).

label_gold: The gold-standard label (hateful/non-hateful) of the entry.

matched_text: The text of the paired perturbation. Each original entry has one perturbation.

matched_id: The unique entry ID of the paired perturbation.

### Data Splits
Train, Validation and Test.

## Dataset Creation

### Curation Rationale
The genre of texts is hateful and non-hateful statements using emoji constructions. The purpose of HatemojiBuild is to address model weaknesses to emoji-based hate, i.e., to "build" better models. 50% of the 5,912 entries are hateful; 50% of the entries are original content and 50% are perturbations.

### Source Data

#### Initial Data Collection and Normalization
We use an online interface designed for dynamic dataset generation and model benchmarking (Dynabench) to collect synthetic adversarial examples in three successive rounds, running between 24th May and 11th June. Each round contains approximately 2,000 entries, where each original entry entered into the interface is paired with an offline perturbation. Data was synthetically generated by a team of trained annotators, i.e., not sampled from social media.

#### Who are the source language producers?
The language producers are also the annotators.
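A minimal loading sketch using the fields named in the card; that the Hub repo exposes the three splits directly is an assumption:

```python
from datasets import load_dataset

builds = load_dataset("HannahRoseKirk/HatemojiBuild")  # assumed to expose train/validation/test
for name, split in builds.items():
    print(name, len(split))

row = builds["train"][0]
print(row["label_gold"], "|", row["text"])
print("label-flipped pair:", row["matched_text"])  # paired perturbation, per the card
```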
HannahRoseKirk/HatemojiBuild
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:expert", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "license:cc-by-4.0", "arxiv:2108.05921", "arxiv:2012.15761", "region:us" ]
2022-04-13T08:12:14+00:00
{"annotations_creators": ["expert"], "language_creators": ["expert-generated"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "HatemojiBuild", "languages": ["en"], "extra_gated_prompt": "We have deactivated the automatic preview for this dataset because it contains hate speech. If you want to see the preview, you can continue."}
2022-05-15T07:56:35+00:00
[ "2108.05921", "2012.15761" ]
[]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #arxiv-2108.05921 #arxiv-2012.15761 #region-us
# Dataset Card for HatemojiBuild ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Content Warning This dataset contains examples of hateful language. ## Dataset Description and Details - Repository: URL - Paper: URL - Point of Contact: URL@URL ### Dataset Summary HatemojiBuild can be used to train, develop and test models on emoji-based hate with challenging adversarial examples and perturbations. HatemojiBuild is a dataset of 5,912 adversarially-generated examples created on Dynabench using a human-and-model-in-the-loop approach. We collect data in three consecutive rounds. Our work follows on from Vidgen et al (2021) _Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection_ (URL who collect four rounds of textual adversarial examples. The R1-R4 data is available at URL The entries in HatemojiBuild are labeled by round (R5-7). The text of each entry is given with its gold-standard label from majority agreement of three annotators. Each original entry is associated with a perturbation, so each row of the dataset matches these two cases. We also provide granular labels of type and target for hateful entries. ### Supported Tasks Hate Speech Detection ### Languages English ## Dataset Structure ### Data Instances 5,912 adversarially-generated instances ### Data Fields entry_id: The unique ID of the entry (assigned to each of the 5,912 cases generated). text: The text of the entry. type: The type of hate assigned to hateful entries. target: The target of hate assigned to hateful entries. URL: The round where the entry was generated. URL: The round and whether the entry came from an original statement (a) or a perturbation (b). set: Whether the entry is an original or perturbation. split: The randomly-assigned train/dev/test split used in our work (80:10:10). label_gold: The gold standard label (hateful/non-hateful) of the test case. matched_text: The text of the paired perturbation. Each original entry has one perturbation. matched_id: The unique entry ID of the paired perturbation. ### Data Splits Train, Validation and Test. ## Dataset Creation ### Curation Rationale The genre of texts is hateful and non-hateful statements using emoji constructions. The purpose of HatemojiBuild is to address model weaknesses on emoji-based hate, to "build" better models. 50% of the 5,912 test cases are hateful. 50% of the entries in the dataset are original content and 50% are perturbations. ### Source Data #### Initial Data Collection and Normalization We use an online interface designed for dynamic dataset generation and model benchmarking (Dynabench) to collect synthetic adversarial examples in three successive rounds, running between 24th May--11th June. Each round contains approximately 2,000 entries, where each original entry inputted to the interface is paired with an offline perturbation. Data was synthetically-generated by a team of trained annotators, i.e., not sampled from social media. #### Who are the source language producers? The language producers are also the annotators. 
### Annotations #### Annotation process We implemented three successive rounds of data generation and model re-training to create the HatemojiBuild dataset. In each round we tasked a team of 10 trained annotators with entering content the model-in-the-loop would misclassify. We refer to this model as the target model. Annotators were instructed to generate linguistically diverse entries while ensuring each entry was (1) realistic, (2) clearly hateful or non-hateful and (3) contained at least one emoji. Each entry was first given a binary label of hateful or non-hateful, and hateful content was assigned secondary labels for the type and target of hate. Each entry was validated by two additional annotators, and an expert resolved disagreements. After validation, annotators created a perturbation for each entry that flips the label. To maximize similarity between originals and perturbations, annotators could either make an emoji substitution while fixing the text or fix the emoji and minimally change the surrounding text. Each perturbation received two additional annotations, and disagreements were resolved by the expert. This weekly cadence of annotator tasks was repeated in three consecutive weeks. #### Who are the annotators? Ten annotators were recruited to work for three weeks, and paid £16/hour. An expert annotator was recruited for quality control purposes and paid £20/hour. In total, there were 11 annotators. All annotators received a training session prior to data collection and had previous experience working on hate speech projects. A daily 'stand-up' meeting was held every morning to communicate feedback and update guidelines as rounds progressed. Annotators were able to contact the research team at any point using a messaging platform. Of 11 annotators, 8 were between 18--29 years old and 3 between 30--39 years old. The completed education level was high school for 3 annotators, undergraduate degree for 1 annotator, taught graduate degree for 4 annotators and post-graduate research degree for 3 annotators. 6 annotators were female, and 5 were male. Annotators came from a variety of nationalities, with 7 British, as well as Jordanian, Irish, Polish and Spanish. 7 annotators identified as ethnically White and the remaining annotators came from various ethnicities including Turkish, Middle Eastern, and Mixed White and South Asian. 4 annotators were Muslim, and others identified as Atheist or as having no religious affiliation. 9 annotators were native English speakers and 2 were non-native but fluent. The majority of annotators (9) used emoji and social media more than once per day. 10 annotators had seen others targeted by abuse online, and 7 had been personally targeted. ### Personal and Sensitive Information HatemojiBuild contains synthetic statements so it has no personal information. It does, however, contain harmful examples of emoji-based hate which could be disturbing or damaging to view. ## Considerations for Using the Data ### Social Impact of Dataset HatemojiBuild contains challenging emoji examples which have "tricked" state-of-the-art transformers models. Malicious actors could take inspiration for bypassing current detection systems on internet platforms, or in principle train a generative hate speech model. However, it also helps to build model robustness to emoji-based hate, so it can be used to mitigate the harm to victims before a model is deployed. 
### Discussion of Biases Annotators were given substantial freedom in the targets of hate resulting in 54 unique targets, and 126 unique intersections of these. The entries from R5-R7 contain 1,082 unique emoji out of 3,521 defined in the Unicode Standard as of September 2020. This diversity helped to mitigate biases in classification towards certain targets but biases likely remain, especially since HatemojiBuild was designed for English-language use of emoji. ### Other Known Limitations While annotators were trained on real-world examples of emoji-based hate from Twitter, the entries in HatemojiBuild are synthetically-generated so may deviate from real-world instances of emoji-based hate. ## Additional Information ### Dataset Curators The dataset was curated by the lead author (Hannah Rose Kirk), using the Dynabench platform. ### Licensing Information Creative Commons Attribution 4.0 International Public License. For full detail see: URL If you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921. ### Contributions Thanks to @HannahKirk for adding this dataset.
[ "# Dataset Card for HatemojiBuild", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Content Warning\nThis datasets contains examples of hateful language.", "## Dataset Description and Details\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL@URL", "### Dataset Summary\nHatemojiBuild can be used to train, develop and test models on emoji-based hate with challenging adversarial examples and perturbations.\nHatemojiBuild is a dataset of 5,912 adversarially-generated examples created on Dynabench using a human-and-model-in-the-loop approach. We collect data in three consecutive rounds. Our work follows on from Vidgen et al (2021) _Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection_ (URL who collect four rounds of textual adversarial examples. The R1-R4 data is available at URL The entries in HatemojiBuild are labeled by round (R5-7). The text of each entry is given with its gold-standard label from majority agreement of three annotators. Each original entry is associated with a perturbation so each row of the dataset. matches these two cases. We also provide granular labels of type and target for hateful entries.", "### Supported Tasks\n\nHate Speech Detection", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\n5,912 adversarially-generated instances", "### Data Fields\n\nentry_id: The unique ID of the entry (assigned to each of the 5,912 cases generated).\n\ntext: The text of the entry.\n\ntype: The type of hate assigned to hateful entries.\n\ntarget: The target of hate assigned to hateful entries.\n\nURL: The round where the entry was generated.\n\nURL: The round and whether the entry came from an original statement (a) or a perturbation (b).\n\nset: Whether the entry is an original or perturbation.\n\nsplit: The randomly-assigned train/dev/test split using in our work (80:10:10).\n\nlabel_gold: The gold standard label (hateful/non-hateful) of the test case.\n\nmatched_text: The text of the paired perturbation. Each original entry has one perturbation.\n\nmatched_id: The unique entry ID of the paired perturbation.", "### Data Splits\n\nTrain, Validation and Test.", "## Dataset Creation", "### Curation Rationale\n\nThe genre of texts is hateful and non-hateful statements using emoji constructions. The purpose of HatemojiBuild is address the model weaknesses to emoji-baaed hate, to \"build\" better models. 50% of the 5,912 test cases are hateful. 50% of the entries in the dataset are original content and 50% are perturbations.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use an online interface designed for dynamic dataset generation and model benchmarking (Dynabench) to collect synthetic adversarial examples in three successive rounds, running between 24th May--11th June. Each round contains approximately 2,000 entries, where each original entry inputed to the interface is paired with an offline perturbation. 
Data was synthetically-generated by a team of trained annotators, i.e., not sampled from social media.", "#### Who are the source language producers?\n\nThe language producers are also the annotators.", "### Annotations", "#### Annotation process\n\nWe implemented three successive rounds of data generation and model re-training to create the HatemojiBuild dataset. \nIn each round we tasked a team of 10 trained annotators with entering content the model-in-the-loop would misclassify. We refer to this model as the target model. Annotators were instructed to generate linguistically diverse entries while ensuring each entry was (1) realistic, (2) clearly hateful or non-hateful and (3) contained at least one emoji. Each entry was first given a binary label of hateful or non-hateful, and hateful content was assigned secondary labels for the type and target of hate. Each entry was validated by two additional annotators, and an expert resolved disagreements. After validation, annotators created a perturbation for each entry that flips the label. To maximize similarity between originals and perturbations, annotators could either make an emoji substitution while fixing the text or fix the emoji and minimally change the surrounding text. Each perturbation received two additional annotations, and disagreements were resolved by the expert. This weekly cadence of annotator tasks was repeated in three consecutive weeks.", "#### Who are the annotators?\n\nTen annotators were recruited to work for three weeks, and paid £16/hour. An expert annotator was recruited for quality control purposes and paid £20/hour. In total, there were 11 annotators. All annotators received a training session prior to data collection and had previous experience working on hate speech projects. A daily 'stand-up' meeting was held every morning to communicate feedback and update guidelines as rounds progressed. Annotators were able to contact the research team at any point using a messaging platform. Of 11 annotators, 8 were between 18--29 years old and 3 between 30--39 years old. The completed education level was high school for 3 annotators, undergraduate degree for 1 annotators, taught graduate degree for 4 annotators and post-graduate research degree for 3 annotators. 6 annotators were female, and 5 were male. Annotators came from a variety of nationalities, with 7 British, as well as Jordanian, Irish, Polish and Spanish. 7 annotators identified as ethnically White and the remaining annotators came from various ethnicities including Turkish, Middle Eastern, and Mixed White and South Asian. 4 annotators were Muslim, and others identified as Atheist or as having no religious affiliation. 9 annotators were native English speakers and 2 were non-native but fluent. The majority of annotators (9) used emoji and social media more than once per day. 10 annotators had seen others targeted by abuse online, and 7 had been personally targeted.", "### Personal and Sensitive Information\n\nHatemojiBuild contains synthetic statements so has no personal information. It does however contains harmful examples of emoji-based hate which could be disturbing or damaging to view.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nHatemojiBuild contains challenging emoji examples which have \"tricked\" state-of-the-art transformers models. Malicious actors could take inspiration for bypassing current detection systems on internet platforms, or in principal train a generative hate speech model. 
However, it also helps to build model robustness to emoji-based hate, so can be used to mitigate the harm to victims before a model is deployed.", "### Discussion of Biases\n\nAnnotators were given substantial freedom in the targets of hate resulting in 54 unique targets, and 126 unique intersections of these. The entries from R5-R7 contain 1,082 unique emoji out of 3,521 defined in the Unicode Standard as of September 2020. This diversity helped to mitigate biases in classification towards certain targets but biases likely remain, especially since HatemojiBuild was designed for English-language use of emoji.", "### Other Known Limitations\n\nWhile annotators were trained on real-world examples of emoji-based hate from Twitter, the entries in HatemojiBuild are synthetically-generated so may deviate from real-world instances of emoji-based hate.", "## Additional Information", "### Dataset Curators\n\nThe dataset was curated by the lead author (Hannah Rose Kirk), using the Dynabench platform.", "### Licensing Information\n\nCreative Commons Attribution 4.0 International Public License. For full detail see: URL\n\n\n\nIf you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.", "### Contributions\n\nThanks to @HannahKirk for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #arxiv-2108.05921 #arxiv-2012.15761 #region-us \n", "# Dataset Card for HatemojiBuild", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Content Warning\nThis datasets contains examples of hateful language.", "## Dataset Description and Details\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL@URL", "### Dataset Summary\nHatemojiBuild can be used to train, develop and test models on emoji-based hate with challenging adversarial examples and perturbations.\nHatemojiBuild is a dataset of 5,912 adversarially-generated examples created on Dynabench using a human-and-model-in-the-loop approach. We collect data in three consecutive rounds. Our work follows on from Vidgen et al (2021) _Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection_ (URL who collect four rounds of textual adversarial examples. The R1-R4 data is available at URL The entries in HatemojiBuild are labeled by round (R5-7). The text of each entry is given with its gold-standard label from majority agreement of three annotators. Each original entry is associated with a perturbation so each row of the dataset. matches these two cases. We also provide granular labels of type and target for hateful entries.", "### Supported Tasks\n\nHate Speech Detection", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\n5,912 adversarially-generated instances", "### Data Fields\n\nentry_id: The unique ID of the entry (assigned to each of the 5,912 cases generated).\n\ntext: The text of the entry.\n\ntype: The type of hate assigned to hateful entries.\n\ntarget: The target of hate assigned to hateful entries.\n\nURL: The round where the entry was generated.\n\nURL: The round and whether the entry came from an original statement (a) or a perturbation (b).\n\nset: Whether the entry is an original or perturbation.\n\nsplit: The randomly-assigned train/dev/test split using in our work (80:10:10).\n\nlabel_gold: The gold standard label (hateful/non-hateful) of the test case.\n\nmatched_text: The text of the paired perturbation. Each original entry has one perturbation.\n\nmatched_id: The unique entry ID of the paired perturbation.", "### Data Splits\n\nTrain, Validation and Test.", "## Dataset Creation", "### Curation Rationale\n\nThe genre of texts is hateful and non-hateful statements using emoji constructions. The purpose of HatemojiBuild is address the model weaknesses to emoji-baaed hate, to \"build\" better models. 50% of the 5,912 test cases are hateful. 
50% of the entries in the dataset are original content and 50% are perturbations.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use an online interface designed for dynamic dataset generation and model benchmarking (Dynabench) to collect synthetic adversarial examples in three successive rounds, running between 24th May--11th June. Each round contains approximately 2,000 entries, where each original entry inputed to the interface is paired with an offline perturbation. Data was synthetically-generated by a team of trained annotators, i.e., not sampled from social media.", "#### Who are the source language producers?\n\nThe language producers are also the annotators.", "### Annotations", "#### Annotation process\n\nWe implemented three successive rounds of data generation and model re-training to create the HatemojiBuild dataset. \nIn each round we tasked a team of 10 trained annotators with entering content the model-in-the-loop would misclassify. We refer to this model as the target model. Annotators were instructed to generate linguistically diverse entries while ensuring each entry was (1) realistic, (2) clearly hateful or non-hateful and (3) contained at least one emoji. Each entry was first given a binary label of hateful or non-hateful, and hateful content was assigned secondary labels for the type and target of hate. Each entry was validated by two additional annotators, and an expert resolved disagreements. After validation, annotators created a perturbation for each entry that flips the label. To maximize similarity between originals and perturbations, annotators could either make an emoji substitution while fixing the text or fix the emoji and minimally change the surrounding text. Each perturbation received two additional annotations, and disagreements were resolved by the expert. This weekly cadence of annotator tasks was repeated in three consecutive weeks.", "#### Who are the annotators?\n\nTen annotators were recruited to work for three weeks, and paid £16/hour. An expert annotator was recruited for quality control purposes and paid £20/hour. In total, there were 11 annotators. All annotators received a training session prior to data collection and had previous experience working on hate speech projects. A daily 'stand-up' meeting was held every morning to communicate feedback and update guidelines as rounds progressed. Annotators were able to contact the research team at any point using a messaging platform. Of 11 annotators, 8 were between 18--29 years old and 3 between 30--39 years old. The completed education level was high school for 3 annotators, undergraduate degree for 1 annotators, taught graduate degree for 4 annotators and post-graduate research degree for 3 annotators. 6 annotators were female, and 5 were male. Annotators came from a variety of nationalities, with 7 British, as well as Jordanian, Irish, Polish and Spanish. 7 annotators identified as ethnically White and the remaining annotators came from various ethnicities including Turkish, Middle Eastern, and Mixed White and South Asian. 4 annotators were Muslim, and others identified as Atheist or as having no religious affiliation. 9 annotators were native English speakers and 2 were non-native but fluent. The majority of annotators (9) used emoji and social media more than once per day. 
10 annotators had seen others targeted by abuse online, and 7 had been personally targeted.", "### Personal and Sensitive Information\n\nHatemojiBuild contains synthetic statements so has no personal information. It does however contains harmful examples of emoji-based hate which could be disturbing or damaging to view.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nHatemojiBuild contains challenging emoji examples which have \"tricked\" state-of-the-art transformers models. Malicious actors could take inspiration for bypassing current detection systems on internet platforms, or in principal train a generative hate speech model. However, it also helps to build model robustness to emoji-based hate, so can be used to mitigate the harm to victims before a model is deployed.", "### Discussion of Biases\n\nAnnotators were given substantial freedom in the targets of hate resulting in 54 unique targets, and 126 unique intersections of these. The entries from R5-R7 contain 1,082 unique emoji out of 3,521 defined in the Unicode Standard as of September 2020. This diversity helped to mitigate biases in classification towards certain targets but biases likely remain, especially since HatemojiBuild was designed for English-language use of emoji.", "### Other Known Limitations\n\nWhile annotators were trained on real-world examples of emoji-based hate from Twitter, the entries in HatemojiBuild are synthetically-generated so may deviate from real-world instances of emoji-based hate.", "## Additional Information", "### Dataset Curators\n\nThe dataset was curated by the lead author (Hannah Rose Kirk), using the Dynabench platform.", "### Licensing Information\n\nCreative Commons Attribution 4.0 International Public License. For full detail see: URL\n\n\n\nIf you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.", "### Contributions\n\nThanks to @HannahKirk for adding this dataset." ]
37d117aedb1c469ebf2adc217dae40ff31a97a23
# hotpotQA-Extended (Annotated) A version of [HotpotQA-Extended](https://huggingface.co/datasets/ghomasHudson/hotpotExtended) with extra annotations about what part of the input contains the answer.
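The card above documents no schema, so the snippet below is only a minimal inspection sketch: it loads the default configuration and prints whatever splits and features it exposes, discovering the layout at runtime rather than assuming field names.

```python
from datasets import load_dataset

ds = load_dataset("ghomasHudson/hotpotExtendedAno")
print(ds)  # available splits and row counts

first_split = next(iter(ds))
print(ds[first_split].features)  # field names and types
```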
ghomasHudson/hotpotExtendedAno
[ "region:us" ]
2022-04-13T09:55:51+00:00
{}
2022-04-13T10:01:17+00:00
[]
[]
TAGS #region-us
# hotpotQA-Extended (Annotated) A version of HotpotQA-Extended with extra annotations about what part of the input contains the answer.
[ "# hotpotQA-Extended (Annotated)\n\nA version of HotpotQA-Extended with extra annotations about what part of the input contains the answer." ]
[ "TAGS\n#region-us \n", "# hotpotQA-Extended (Annotated)\n\nA version of HotpotQA-Extended with extra annotations about what part of the input contains the answer." ]
3cdedf844922ab40393d46d4c7f81c596e1c6d45
This is a subset of the "ceyda/smithsonian_butterflies" dataset with additional processing done to train the "ceyda/butterfly_gan" model. The preprocessing includes (a rough code sketch of the cropping step is given below): - Adding "sim_score" to images with a CLIP model using "pretty butterfly","one butterfly","butterfly with open wings","colorful butterfly" - Removing butterflies with the same name(species) - Limiting only to the top 1000 images - Removing the background (doing another sim_scoring after bg removal looked visually worse, so it wasn't done) - Detecting contours - Cropping to the bounding box of the contour with the largest area - Converting back to RGB
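A rough reconstruction of the contour-detection and cropping steps above; this is a sketch rather than the script actually used, and thresholding on the alpha channel left behind by background removal is an assumption:

```python
import cv2
import numpy as np
from PIL import Image

def crop_to_largest_contour(path: str) -> Image.Image:
    rgba = np.array(Image.open(path).convert("RGBA"))
    # After background removal, foreground pixels carry non-zero alpha.
    mask = (rgba[:, :, 3] > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError(f"no foreground found in {path}")
    largest = max(contours, key=cv2.contourArea)  # keep the biggest blob only
    x, y, w, h = cv2.boundingRect(largest)
    return Image.fromarray(rgba[y:y + h, x:x + w]).convert("RGB")
```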
huggan/smithsonian_butterflies_subset
[ "region:us" ]
2022-04-13T10:36:00+00:00
{}
2022-04-16T07:02:36+00:00
[]
[]
TAGS #region-us
This is a subset of the "ceyda/smithsonian_butterflies" dataset with additional processing done to train the "ceyda/butterfly_gan" model. The preprocessing includes: - Adding "sim_score" to images with a CLIP model using "pretty butterfly","one butterfly","butterfly with open wings","colorful butterfly" - Removing butterflies with the same name(species) - Limiting only to the top 1000 images - Removing the background (doing another sim_scoring after bg removal looked visually worse, so it wasn't done) - Detecting contours - Cropping to the bounding box of the contour with the largest area - Converting back to RGB
[]
[ "TAGS\n#region-us \n" ]
161fc9fce16d7e942e0ed14046c1e98956437061
# Dataset Card for squad_it ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Converted dataset version to be used in Hugging Face. Originally created by Croce et al. in 2018, SQuAD-it contains more than 60,000 question/answer pairs in Italian, derived from the original English SQuAD dataset and distributed in JSON file format. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @InProceedings{10.1007/978-3-030-03840-3_29, author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto", editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo", title="Neural Learning for Question Answering in Italian", booktitle="AI*IA 2018 -- Advances in Artificial Intelligence", year="2018", publisher="Springer International Publishing", address="Cham", pages="389--402", isbn="978-3-030-03840-3" } ```
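A loading sketch for completeness; since the schema sections above are left unfilled, the SQuAD-style field names below (`question`, `answers`) are assumptions carried over from the English original rather than documented facts, and the split name is discovered at runtime.

```python
from datasets import load_dataset

squad_it = load_dataset("bullmount/squad_it")
split = next(iter(squad_it))  # split names are not documented on the card
row = squad_it[split][0]
print(row["question"])  # assumed SQuAD-style field
print(row["answers"])   # assumed SQuAD-style field
```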
bullmount/squad_it
[ "region:us" ]
2022-04-14T05:58:53+00:00
{}
2022-04-14T15:06:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for squad_it ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Converted dataset version to be used in Hugging Face. Originally created by Croce et al. in 2018, SQuAD-it contains more than 60,000 question/answer pairs in Italian, derived from the original English SQuAD dataset and distributed in JSON file format. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for squad_it", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\nConverted dataset version to be used in Huggingface.\nOriginally created by Croce et al. at 2018, the SQuAD-it The dataset contains more than 60,000 question/answer pairs in Italian derived from the original English SQuAD dataset., in Italian language. Containing 60,000+ in JSON file format.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "# Dataset Card for squad_it", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\nConverted dataset version to be used in Huggingface.\nOriginally created by Croce et al. at 2018, the SQuAD-it The dataset contains more than 60,000 question/answer pairs in Italian derived from the original English SQuAD dataset., in Italian language. Containing 60,000+ in JSON file format.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
4b51f4dae6b6ad746445f05059d6793c1f6ea988
# KP20k Benchmark Dataset for Keyphrase Generation ## About KP20k is a dataset for benchmarking keyphrase extraction and generation models. The data is composed of 570 809 abstracts and their associated titles from scientific articles. Details about the dataset can be found in the original paper: - Meng et al 2017. [Deep keyphrase Generation](https://aclanthology.org/P17-1054.pdf) Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592 Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper: - Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text. ## Content The dataset is divided into the following three splits: | Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen | | :--------- | ----------: | -----------: | --------: | ----------: | ------: | -------: | | Train | 530 809 | 5.29 | 58.19 | 10.93 | 17.36 | 13.52 | | Test | 20 000 | 5.28 | 58.40 | 10.84 | 17.20 | 13.56 | | Validation | 20 000 | 5.27 | 58.20 | 10.94 | 17.26 | 13.61 | The following data fields are available: - **id**: unique identifier of the document. **NB** There were no ids in the original dataset. The ids were generated using the python module shortuuid (https://pypi.org/project/shortuuid/) - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of the author assigned keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. **NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract).
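A short usage sketch built on the fields documented above; the single-letter PRMU codes and the need for `trust_remote_code=True` (the repository ships a loading script) are assumptions, not guarantees.

```python
from collections import Counter

from datasets import load_dataset

kp20k = load_dataset("taln-ls2n/kp20k", trust_remote_code=True)

# Recompute the PRMU distribution reported in the table above on one split.
counts = Counter(tag for doc in kp20k["validation"] for tag in doc["prmu"])
total = sum(counts.values())
for tag in ("P", "R", "M", "U"):
    print(f"{tag}: {100 * counts[tag] / total:.2f}%")
```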
taln-ls2n/kp20k
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:unknown", "keyphrase-generation", "keyphrase-extraction", "text-mining", "region:us" ]
2022-04-14T08:00:02+00:00
{"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "KP20k", "tags": ["keyphrase-generation", "keyphrase-extraction", "text-mining"]}
2023-09-13T12:15:04+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-unknown #keyphrase-generation #keyphrase-extraction #text-mining #region-us
KP20k Benchmark Dataset for Keyphrase Generation ================================================ About ----- KP20k is a dataset for benchmarking keyphrase extraction and generation models. The data is composed of 570 809 abstracts and their associated titles from scientific articles. Details about the dataset can be found in the original paper: * Meng et al 2017. Deep keyphrase Generation Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592 Reference (indexer-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in the following paper: * Florian Boudin and Ygor Gallina. 2021. Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. Text pre-processing (tokenization) is carried out using spacy (en\_core\_web\_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text. Content ------- The dataset is divided into the following three splits: The following data fields are available: * id: unique identifier of the document. NB There were no ids in the original dataset. The ids were generated using the python module shortuuid (URL * title: title of the document. * abstract: abstract of the document. * keyphrases: list of the author assigned keyphrases. * prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases. NB: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract).
[]
[ "TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-unknown #keyphrase-generation #keyphrase-extraction #text-mining #region-us \n" ]
5eb9bca5c7dc850b2a42df268b78b88190ab2466
# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT # Dataset Card for PET ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) - [Annotation Guidelines](#annotationguidelines) - [Update](#updates) - [Loading data](#loadingdata) ## Dataset Description - **Homepage:** https://pdi.fbk.eu/pet-dataset/ - **Paper:** https://arxiv.org/abs/2203.04860 - **Point of Contact:** [Patrizio Bellan]([email protected]) ### Dataset Summary Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text. ### Supported Tasks and Leaderboards - Token Classification - Named Entity Recognition - Relations Extraction ### Languages English ## Dataset Structure Test set to benchmark *Business Process Extraction from Text* approaches. ### Data Instances #### Token Classification For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of process elements of the sentence. Below, an example of data instance. 
``` { "document name":"doc-1.1", "sentence-ID":1, "tokens":["Whenever","the","sales","department","receives","an","order",",","a","new","process","instance","is","created","."], "ner-tags":["O","B-Actor","I-Actor","I-Actor","B-Activity","B-Activity Data","I-Activity Data","O","O","O","O","O","O","O","O"] } ``` #### Relations Extraction For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, a list of tokens representing the words of the document instance, a list of integers representing each word's position within its sentence, a list of *ner tags* (in IOB2 format) representing the annotation of the token, a list of sentence ids giving, for each token, the number of its sentence, and a list of relations of the document. Below, an example of data instance. ``` { "document name": "doc-1.1", "tokens": ["A", "small", "company",...], "tokens-IDs": [0, 1, 2, ...], "ner_tags": ["O", "O", "O", ...], "sentence-IDs": [0, 0, 0, ...], "relations": { "source-head-sentence-ID": [1, 1, 1, ...], "source-head-word-ID": [4, 4, 4, ...], "relation-type": ["uses", "flow", "actor recipient", ...], "target-head-sentence-ID": [1, 2, 1,...], "target-head-word-ID": [5, 9, 1, ...] } } ``` ### Data Fields #### Token Classification - *document name*: a string used to represent the name of the document. - *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document. - *tokens*: a list of strings representing the words of the sentence - *ner-tags*: a list of strings representing the annotation for each word. The allowed **ner-tags** are: - **O**: An O tag indicates that a token belongs to no chunk. - **B-Actor**: This tag indicates the beginning of an *Actor* chunk. - **I-Actor**: This tag indicates that the tag is inside an *Actor* chunk. - **B-Activity**: This tag indicates the beginning of an *Activity* chunk. - **I-Activity**: This tag indicates that the tag is inside an *Activity* chunk. - **B-Activity Data**: This tag indicates the beginning of an *Activity Data* chunk. - **I-Activity Data**: This tag indicates that the tag is inside an *Activity Data* chunk. - **B-Further Specification**: This tag indicates the beginning of a *Further Specification* chunk. - **I-Further Specification**: This tag indicates that the tag is inside a *Further Specification* chunk. - **B-XOR Gateway**: This tag indicates the beginning of a *XOR Gateway* chunk. - **I-XOR Gateway**: This tag indicates that the tag is inside a *XOR Gateway* chunk. - **B-Condition Specification**: This tag indicates the beginning of a *Condition Specification* chunk. - **I-Condition Specification**: This tag indicates that the tag is inside a *Condition Specification* chunk. - **B-AND Gateway**: This tag indicates the beginning of an *AND Gateway* chunk. - **I-AND Gateway**: This tag indicates that the tag is inside an *AND Gateway* chunk. To have a complete explanation of each process element tag please refer to the [research paper](https://arxiv.org/abs/2203.04860) and the [annotation guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf). ### Relations Extraction - *document name*: a string used to represent the name of the document. - *tokens*: a list of strings representing the words of the document - *tokens-IDs*: a list of integers representing the word position within a sentence. - *ner_tags*: a list of strings representing the annotation for each word. 
(see ner-tags above) - *sentence-IDs*: a list of integers representing the sentence number for each word of the document. - *relations*: a list of document relations. - *source-head-sentence-ID*: a list of sentence IDs pointing to the sentence number of the head (first token) of the source entity. - *source-head-word-ID*: a list of token IDs pointing to the word ID of the head (first token) of the source entity. - *relation-type*: a list of relation tags. - *target-head-sentence-ID*: a list of sentence IDs pointing to the sentence number of the head (first token) of the target entity. - *target-head-word-ID*: a list of token IDs pointing to the word ID of the head (first token) of the target entity. For instance, a relation is defined by the instances of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position. In the following example, the first relation of the first document is shown: ```python document_1=modelhub_dataset['test'][0] relation = { 'source-head-sentence-ID': document_1['relations']['source-head-sentence-ID'][0], 'source-head-word-ID': document_1['relations']['source-head-word-ID'][0], 'relation-type': document_1['relations']['relation-type'][0], 'target-head-sentence-ID': document_1['relations']['target-head-sentence-ID'][0], 'target-head-word-ID': document_1['relations']['target-head-word-ID'][0], } print(relation) ``` the output is: ```python {'relation-type': 'uses', 'source-head-sentence-ID': 1, 'source-head-word-ID': 4, 'target-head-sentence-ID': 1, 'target-head-word-ID': 5} ``` That means: the entity in sentence number *1*, starting at the token position *4*, has a *uses* relation with the entity in sentence number *1* starting at token position *5* ### Data Splits The data was not split. It contains the test set only. ## Dataset Creation ### Curation Rationale Although there is a long tradition of work in NLP on extracting entities and relations from text to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. ### Source Data #### Initial Data Collection and Normalization The dataset construction process has been split into five main phases: 1. Text pre-processing. As the first operation, we checked the content of each document and tokenized it. This initial check was necessary since some of the original texts were automatically translated into English by the authors of the dataset. The translations were never validated; indeed, several errors have been found and fixed. 2. Text Annotation. Each text has been annotated by using the [guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf). The team was composed of five annotators with high expertise in BPMN. Each document has been assigned to three experts that were in charge of identifying all the elements and flows within each document. In this phase, we used the Inception tool to support annotators. 3. Automatic annotation fixing. After the second phase, we ran an automatic procedure relying on a rule-based script to automatically fix annotations that were not compliant with the guidelines. 
For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is a missing article within an annotation related to an Actor. In this case, the script included it in the annotation. This phase allowed us to remove possible annotation errors and to obtain annotations compliant with the guidelines. 4. Agreement Computation. Here, we computed, on the annotation provided by the experts, the agreement scores for each process element and for each relation between process elements pair adopting the methodology proposed in [Hripcsak *et al.*](https://academic.oup.com/jamia/article/12/3/296/812057?login=true). We measured the agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1-measure as the number of cases that raters agree are negative grows. By following such a methodology, an annotation was considered in agreement among the experts if and only if they captured the same span of words and assigned the same process element tag to the annotation. 5. Reconciliation. The last phase consisted of the mitigation of disagreements within the annotations provided by the experts. The aim of this phase is to obtain a shared and agreed set of gold standard annotations on each text for both entities and relations. Such entities also enable the generation of the related fully-connected process model flow that can be rendered by using, but not limited to, a BPMN diagram. During this last phase, among the 47 documents originally included in the dataset, 2 of them were discarded. These texts were not fully annotated by the annotators since they were not able to completely understand which process elements were actually included in some specific parts of the text. For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations. #### Who are the source language producers? English ### Annotations #### Annotation process You can read about the annotation process in the original paper https://arxiv.org/abs/2203.04860 #### Who are the annotators? Expert Annotators ### Personal and Sensitive Information No personal or sensitive information issues. ## Considerations for Using the Data ### Social Impact of Dataset The dataset has no social impact ### Discussion of Biases No bias found in the dataset ### Other Known Limitations The *Further specification* and *AND Gateway* elements obtained very poor performance on the baselines proposed in the paper. The *AND Gateway* is the least represented process element in this dataset. The *Further Specification* process element was the most difficult element to annotate. ## Additional Information ### Dataset Curators - Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy) - Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy) - Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy) - Han van der Aa (University of Mannheim, Mannheim, Germany) - Simone Ponzetto (University of Mannheim, Mannheim, Germany) ### Licensing Information ### Citation Information ``` @inproceedings{DBLP:conf/aiia/BellanGDPA22, author = {Patrizio Bellan and Chiara Ghidini and Mauro Dragoni and Simone Paolo Ponzetto and Han van der Aa}, editor = {Debora Nozza and Lucia C. 
Passaro and Marco Polignano}, title = {Process Extraction from Natural Language Text: the {PET} Dataset and Annotation Guidelines}, booktitle = {Proceedings of the Sixth Workshop on Natural Language for Artificial Intelligence {(NL4AI} 2022) co-located with 21th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2022), Udine, November 30th, 2022}, series = {{CEUR} Workshop Proceedings}, volume = {3287}, pages = {177--191}, publisher = {CEUR-WS.org}, year = {2022}, url = {https://ceur-ws.org/Vol-3287/paper18.pdf}, timestamp = {Fri, 10 Mar 2023 16:23:01 +0100}, biburl = {https://dblp.org/rec/conf/aiia/BellanGDPA22.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } @inproceedings{DBLP:conf/bpm/BellanADGP22, author = {Patrizio Bellan and Han van der Aa and Mauro Dragoni and Chiara Ghidini and Simone Paolo Ponzetto}, editor = {Cristina Cabanillas and Niels Frederik Garmann{-}Johnsen and Agnes Koschmider}, title = {{PET:} An Annotated Dataset for Process Extraction from Natural Language Text Tasks}, booktitle = {Business Process Management Workshops - {BPM} 2022 International Workshops, M{\"{u}}nster, Germany, September 11-16, 2022, Revised Selected Papers}, series = {Lecture Notes in Business Information Processing}, volume = {460}, pages = {315--321}, publisher = {Springer}, year = {2022}, url = {https://doi.org/10.1007/978-3-031-25383-6\_23}, doi = {10.1007/978-3-031-25383-6\_23}, timestamp = {Tue, 14 Feb 2023 09:47:10 +0100}, biburl = {https://dblp.org/rec/conf/bpm/BellanADGP22.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [Patrizio Bellan](https://pdi.fbk.eu/bellan/) for adding this dataset. #### <a name="updates"></a>Update - v1.0.0: Added token classification task - v1.0.1: Added extraction relation task - v1.1.0: Fixed minor errors, fixed performs relations Version 1.1.0 can be found [here](https://huggingface.co/datasets/patriziobellan/PETv11) ## <a name="annotationguidelines"></a>Annotation Guidelines ### Inception Schema The inception schema can be found [here](https://pdi.fbk.eu/pet/inception-schema.json) ### Annotation Guidelines The Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded [here](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf) ### Article The article can be downloaded [here](https://ceur-ws.org/Vol-3287/paper18.pdf) ### Python Interface A Python interface (beta version) to interact with the dataset can be found [here](https://pypi.org/project/petdatasetreader/) You can find the **BASELINES**, the annotation data, and a graphical interface to visualize predictions [here](https://github.com/patriziobellan86/PETbaselines) ### Benchmarks A Python benchmarking procedure package to test approaches on the PET dataset can be found [here](https://pypi.org/project/petbenchmarks/) ## <a name="loadingdata"></a>Loading data ### Token-classification task ```python from datasets import load_dataset modelhub_dataset = load_dataset("patriziobellan/PET", name='token-classification') ``` ### Relations-extraction task ```python from datasets import load_dataset modelhub_dataset = load_dataset("patriziobellan/PET", name='relations-extraction') ```
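Beyond the loaders above, a small helper makes the IOB2 annotations easier to consume. This is a sketch written against the documented `ner-tags` field of the token-classification configuration, not part of the official PET interface:

```python
def iob2_to_spans(tags):
    """Collapse a list of IOB2 tags into (label, start, end) spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        # A span closes on "O", on a new "B-" tag, or when the label changes.
        closes = tag == "O" or tag.startswith("B-") or (label is not None and tag[2:] != label)
        if closes and label is not None:
            spans.append((label, start, i))
            start, label = None, None
        if tag != "O" and label is None:
            start, label = i, tag[2:]
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

# Applied to the first token-classification instance shown above, this yields:
# [('Actor', 1, 4), ('Activity', 4, 5), ('Activity Data', 5, 7)]
```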
patriziobellan/PET
[ "task_categories:token-classification", "size_categories:n<1K", "language:en", "license:mit", "Business Process Management", "NLP", "ML", "DL", "arxiv:2203.04860", "region:us" ]
2022-04-14T08:35:11+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["token-classification"], "pretty_name": "PET", "tags": ["Business Process Management", "NLP", "ML", "DL"]}
2023-07-05T13:03:24+00:00
[ "2203.04860" ]
[ "en" ]
TAGS #task_categories-token-classification #size_categories-n<1K #language-English #license-mit #Business Process Management #NLP #ML #DL #arxiv-2203.04860 #region-us
# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT # Dataset Card for PET ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions - Annotation Guidelines - Update - Loading data ## Dataset Description - Homepage: URL - Paper: URL - Point of Contact: Patrizio Bellan ### Dataset Summary Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text. ### Supported Tasks and Leaderboards - Token Classification - Named Entity Recognition - Relations Extraction ### Languages English ## Dataset Structure Test set to beanchmark *Business Process Extraction from Text* approaches. ### Data Instances #### Token Classification For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of process elements of the sentence. Below, an example of data instance. #### Relations Extraction For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, a list of tokens representing the words of the document instance, a list of interger representing the words position within each sentence of the document instance, a list of *ner tags* (in IOB2 format) representing the annotation of the token, a list of sentence id representing for each token the number of the sentence, and a list of relations of the document. Below, an example of data instance. ### Data Fields #### Token Classification - *document name*: a string used to represent the name of the document. - *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document. - *tokens*: a list of string representing the words of the sentence - *ner-tags*: a list of string representing the annotation for each word. The allowed ner-tags are: - O: An O tag indicates that a token belongs to no chunk. - B-Actor: This tag indicates the beginning of an *Actor* chunk. - I-Actor: This tag indicates that the tag is inside an *Actor* chunk. - B-Activity: This tag indicates the beginning of an *Activity* chunk. - I-Activity: This tag indicates that the tag is inside an *Activity* chunk. - B-Activity Data: This tag indicates the beginning of an *Activity Data* chunk. 
- I-Activity Data: This tag indicates that the tag is inside an *Activity Data* chunk. - B-Further Specification: This tag indicates the beginning of a *Further Specification* chunk. - I-Further Specification: This tag indicates that the tag is inside a *Further Specification* chunk. - B-XOR Gateway: This tag indicates the beginning of a *XOR Gateway* chunk. - I-XOR Gateway: This tag indicates that the tag is inside a *XOR Gateway* chunk. - B-Condition Specification: This tag indicates the beginning of a *Condition Specification* chunk. - I-Condition Specification: This tag indicates that the tag is inside a *Condition Specification* chunk. - B-AND Gateway: This tag indicates the beginning of an *AND Gateway* chunk. - I-AND Gateway: This tag indicates that the tag is inside an *AND Gateway* chunk. To have a complete explanation of each process element tag please refer to the research paper and the annotation guidelines. ### Relations Extraction - *document name*: a string used to represent the name of the document. - *tokens*: a list of string representing the words of the document - *tokens-IDs*: a list of interger representing the word position within a sentence. - *ner_tags*: a list of string representing the annotation for each word. (see ner-tags above) - *sentence-IDs*: a list of interger representing the sentence number for each word of the document. - *relations*:: a list of document relations. - *source-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the source entity. - *source-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the source entity. - *relation-type*: a list of relation tags. - *target-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the target entity. - *target-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the target entity. For instance, a relation is defined by the instances of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position. In the following example, the first relation of the first document is shown: the output is: That means: the entity in sentence number *1*, starting at the token position *4* has a *uses* relation with the entity in sentence number *1* starting at token position *1* ### Data Splits The data was not split. It contains the test set only. ## Dataset Creation ### Curation Rationale Although there is a long tradition of work in NLP on extracting entities and relations from text to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. ### Source Data #### Initial Data Collection and Normalization The dataset construction process has been split in five main phases: 1. Text pre-processing. As the first operation, we check the content of each document and we tokenized it. This initial check was necessary since some of the original texts were automatically translated into English by the authors of the dataset. The translations were never validated, indeed, several errors have been found and fixed. 2. Text Annotation. 
Each text has been annotated by using the guidelines. The team was composed by five annotators with high expertise in BPMN. Each document has been assigned to three experts that were in change of identifying all the elements and flows with each document. In this phase, we used the the Inception tool to support annotators. 3. Automatic annotation fixing. After the second phase, we ran an automatic procedure relying on a rule-based script to automatically fix annotations that were not compliant with the guidelines. For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is the missing of the article within an annotation related to an Actor. In this case, the script included it in the annotation. This phase allowed to remove possible annotation errors and to obtain annotations compliant with the guidelines. 4. Agreement Computation. Here, we computed, on the annotation provided by the experts, the agreement scores for each process element and for each relation between process elements pair adopting the methodology proposed in Hripcsak *et al.*. We measured the agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1-measure as the number of cases that raters agree are negative grows. By following such a methodology, an annotation was considered in agreement among the experts if and only if they capture the same span of words and they assign the same process element tag to the annotation. 5. Reconciliation. The last phase consisted of the mitigation of disagreements within the annotations provided by the experts. The aim of this phase is to obtain a shared and agreed set of gold standard annotations on each text for both entities and relations. Such entities also enable the generation of the related full-connected process model flow that can be rendered by using, but not limited to, a BPMN diagram. During this last phase, among the 47 documents originally included into the dataset, 2 of them were discarded. These texts were not fully annotated by the annotators since they were not be able to completely understand which process elements were actually included in some specific parts of the text. For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations. #### Who are the source language producers? English ### Annotations #### Annotation process You can read about the annotation process in the original paper URL #### Who are the annotators? Expert Annotators ### Personal and Sensitive Information No personal or sensitive information issues. ## Considerations for Using the Data ### Social Impact of Dataset The dataset has no social impact ### Discussion of Biases No bias found in the dataset ### Other Known Limitations The *Further specification* and *AND Gateway* elements obtained very poor performance on the baselines proposed in the paper. The *AND Gateway* is the less represented process elements in this dataset. The *Further Specification* process element was the most difficult element to annotate. 
## Additional Information ### Dataset Curators - Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy) - Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy) - Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy) - Han van der Aa (University of Mannheim, Mannheim, Germany) - Simone Ponzetto (University of Mannheim, Mannheim, Germany) ### Licensing Information ### Contributions Thanks to Patrizio Bellan for adding this dataset. #### <a name="updates"></a>Update - v1.0.0: Added token classification task - v1.0.1: Added extraction relation task - v1.1.0: Fixed minor errors, fixed performs relations Version 1.1.0 cab be found here ## <a name="annotationguidelines"></a>Annotation Guidelines ### Inception Schema The inception schema can be found here ### Annotation Guidelines The Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded here ### Article The article can be downloaded here ### Python Interface A Python interface (beta version) to interact with the dataset can be found here You can find the BASELINES, the annotation data, and a graphical interface to visualize predictions here ### Benchmarks A Python benchmarking procedure package to test approaches on the PET dataset ca be found here ## <a name="loadingdata"></a>Loading data ### Token-classification task ### Relations-extraction task
[ "# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT", "# Dataset Card for PET", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions\n- Annotation Guidelines\n- Update\n- Loading data", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: Patrizio Bellan", "### Dataset Summary\n\nAbstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text.", "### Supported Tasks and Leaderboards\n\n- Token Classification\n- Named Entity Recognition\n- Relations Extraction", "### Languages\n\nEnglish", "## Dataset Structure\n\nTest set to beanchmark *Business Process Extraction from Text* approaches.", "### Data Instances", "#### Token Classification\n\nFor each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of process elements of the sentence.\n\nBelow, an example of data instance.", "#### Relations Extraction\n\n\nFor each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, a list of tokens representing the words of the document instance, a list of interger representing the words position within each sentence of the document instance, a list of *ner tags* (in IOB2 format) representing the annotation of the token, a list of sentence id representing for each token the number of the sentence, and a list of relations of the document.\n\nBelow, an example of data instance.", "### Data Fields", "#### Token Classification\n\n- *document name*: a string used to represent the name of the document.\n- *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document.\n- *tokens*: a list of string representing the words of the sentence\n- *ner-tags*: a list of string representing the annotation for each word.\n\nThe allowed ner-tags are: \n - O: An O tag indicates that a token belongs to no chunk.\n - B-Actor: This tag indicates the beginning of an *Actor* chunk.\n - I-Actor: This tag indicates that the tag is inside an *Actor* chunk.\n - B-Activity: This tag indicates the beginning of an *Activity* chunk.\n - I-Activity: This tag indicates that the tag 
is inside an *Activity* chunk.\n - B-Activity Data: This tag indicates the beginning of an *Activity Data* chunk.\n - I-Activity Data: This tag indicates that the tag is inside an *Activity Data* chunk.\n - B-Further Specification: This tag indicates the beginning of a *Further Specification* chunk.\n - I-Further Specification: This tag indicates that the tag is inside a *Further Specification* chunk.\n - B-XOR Gateway: This tag indicates the beginning of a *XOR Gateway* chunk.\n - I-XOR Gateway: This tag indicates that the tag is inside a *XOR Gateway* chunk.\n - B-Condition Specification: This tag indicates the beginning of a *Condition Specification* chunk.\n - I-Condition Specification: This tag indicates that the tag is inside a *Condition Specification* chunk.\n - B-AND Gateway: This tag indicates the beginning of an *AND Gateway* chunk.\n - I-AND Gateway: This tag indicates that the tag is inside an *AND Gateway* chunk.\n\nTo have a complete explanation of each process element tag please refer to the research paper and the annotation guidelines.", "### Relations Extraction\n- *document name*: a string used to represent the name of the document.\n- *tokens*: a list of string representing the words of the document\n- *tokens-IDs*: a list of interger representing the word position within a sentence.\n- *ner_tags*: a list of string representing the annotation for each word. (see ner-tags above)\n- *sentence-IDs*: a list of interger representing the sentence number for each word of the document.\n- *relations*:: a list of document relations.\n - *source-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the source entity.\n - *source-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the source entity.\n - *relation-type*: a list of relation tags.\n - *target-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the target entity.\n - *target-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the target entity.\n\nFor instance, a relation is defined by the instances of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position.\nIn the following example, the first relation of the first document is shown:\n\nthe output is:\n\nThat means:\n the entity in sentence number *1*, starting at the token position *4* has a *uses* relation with the entity in sentence number *1* starting at token position *1*", "### Data Splits\n\nThe data was not split. It contains the test set only.", "## Dataset Creation", "### Curation Rationale\n\nAlthough there is a long tradition of work in NLP on extracting entities and relations from text to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset construction process has been split in five main phases:\n 1. Text pre-processing. As the first operation, we check the content of each document and we tokenized it. 
This initial check was necessary since some of the original texts were automatically translated into English by the authors of the dataset. The translations were never validated, indeed, several errors have been found and fixed.\n \n 2. Text Annotation. Each text has been annotated by using the guidelines. The team was composed by five annotators with high expertise in BPMN. Each document has been assigned to three experts that were in change of identifying all the elements and flows with each document. In this phase, we used the the Inception tool to support annotators.\n \n 3. Automatic annotation fixing. After the second phase, we ran an automatic procedure relying on a rule-based script to automatically fix annotations that were not compliant with the guidelines. For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is the missing of the article within an annotation related to an Actor. In this case, the script included it in the annotation. This phase allowed to remove possible annotation errors and to obtain annotations compliant with the guidelines.\n\n 4. Agreement Computation. Here, we computed, on the annotation provided by the experts, the agreement scores for each process element and for each relation between process elements pair adopting the methodology proposed in Hripcsak *et al.*. We measured the agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1-measure as the number of cases that raters agree are negative grows. By following such a methodology, an annotation was considered in agreement among the experts if and only if they capture the same span of words and they assign the same process element tag to the annotation.\n\n 5. Reconciliation. The last phase consisted of the mitigation of disagreements within the annotations provided by the experts. The aim of this phase is to obtain a shared and agreed set of gold standard annotations on each text for both entities and relations. Such entities also enable the generation of the related full-connected process model flow that can be rendered by using, but not limited to, a BPMN diagram. During this last phase, among the 47 documents originally included into the dataset, 2 of them were discarded. These texts were not fully annotated by the annotators since they were not be able to completely understand which process elements were actually included in some specific parts of the text. 
For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations.", "#### Who are the source language producers?\n\nEnglish", "### Annotations", "#### Annotation process\nYou can read about the annotation process in the original paper URL", "#### Who are the annotators?\n\nExpert Annotators", "### Personal and Sensitive Information\n\nNo personal or sensitive information issues.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset has no social impact", "### Discussion of Biases\n\nNo bias found in the dataset", "### Other Known Limitations\n\nThe *Further specification* and *AND Gateway* elements obtained very poor performance on the baselines proposed in the paper.\nThe *AND Gateway* is the less represented process elements in this dataset.\nThe *Further Specification* process element was the most difficult element to annotate.", "## Additional Information", "### Dataset Curators\n\n- Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy)\n- Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy)\n- Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy)\n- Han van der Aa (University of Mannheim, Mannheim, Germany)\n- Simone Ponzetto (University of Mannheim, Mannheim, Germany)", "### Licensing Information", "### Contributions\n\nThanks to Patrizio Bellan for adding this dataset.", "#### <a name=\"updates\"></a>Update\n- v1.0.0: Added token classification task\n- v1.0.1: Added extraction relation task\n- v1.1.0: Fixed minor errors, fixed performs relations\n\nVersion 1.1.0 cab be found here", "## <a name=\"annotationguidelines\"></a>Annotation Guidelines", "### Inception Schema\n\nThe inception schema can be found here", "### Annotation Guidelines\n\nThe Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded here", "### Article\n\nThe article can be downloaded here", "### Python Interface\n\nA Python interface (beta version) to interact with the dataset can be found here\n\nYou can find the BASELINES, the annotation data, and a graphical interface to visualize predictions here", "### Benchmarks\n\nA Python benchmarking procedure package to test approaches on the PET dataset ca be found here", "## <a name=\"loadingdata\"></a>Loading data", "### Token-classification task", "### Relations-extraction task" ]
[ "TAGS\n#task_categories-token-classification #size_categories-n<1K #language-English #license-mit #Business Process Management #NLP #ML #DL #arxiv-2203.04860 #region-us \n", "# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT", "# Dataset Card for PET", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions\n- Annotation Guidelines\n- Update\n- Loading data", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: Patrizio Bellan", "### Dataset Summary\n\nAbstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text.", "### Supported Tasks and Leaderboards\n\n- Token Classification\n- Named Entity Recognition\n- Relations Extraction", "### Languages\n\nEnglish", "## Dataset Structure\n\nTest set to beanchmark *Business Process Extraction from Text* approaches.", "### Data Instances", "#### Token Classification\n\nFor each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of process elements of the sentence.\n\nBelow, an example of data instance.", "#### Relations Extraction\n\n\nFor each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, a list of tokens representing the words of the document instance, a list of interger representing the words position within each sentence of the document instance, a list of *ner tags* (in IOB2 format) representing the annotation of the token, a list of sentence id representing for each token the number of the sentence, and a list of relations of the document.\n\nBelow, an example of data instance.", "### Data Fields", "#### Token Classification\n\n- *document name*: a string used to represent the name of the document.\n- *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document.\n- *tokens*: a list of string representing the words of the sentence\n- *ner-tags*: a list of string representing the annotation for each word.\n\nThe allowed ner-tags are: \n - O: An O tag indicates that a token belongs to no chunk.\n - B-Actor: This tag indicates the beginning of an *Actor* chunk.\n - I-Actor: This tag 
indicates that the tag is inside an *Actor* chunk.\n - B-Activity: This tag indicates the beginning of an *Activity* chunk.\n - I-Activity: This tag indicates that the tag is inside an *Activity* chunk.\n - B-Activity Data: This tag indicates the beginning of an *Activity Data* chunk.\n - I-Activity Data: This tag indicates that the tag is inside an *Activity Data* chunk.\n - B-Further Specification: This tag indicates the beginning of a *Further Specification* chunk.\n - I-Further Specification: This tag indicates that the tag is inside a *Further Specification* chunk.\n - B-XOR Gateway: This tag indicates the beginning of a *XOR Gateway* chunk.\n - I-XOR Gateway: This tag indicates that the tag is inside a *XOR Gateway* chunk.\n - B-Condition Specification: This tag indicates the beginning of a *Condition Specification* chunk.\n - I-Condition Specification: This tag indicates that the tag is inside a *Condition Specification* chunk.\n - B-AND Gateway: This tag indicates the beginning of an *AND Gateway* chunk.\n - I-AND Gateway: This tag indicates that the tag is inside an *AND Gateway* chunk.\n\nTo have a complete explanation of each process element tag please refer to the research paper and the annotation guidelines.", "### Relations Extraction\n- *document name*: a string used to represent the name of the document.\n- *tokens*: a list of string representing the words of the document\n- *tokens-IDs*: a list of interger representing the word position within a sentence.\n- *ner_tags*: a list of string representing the annotation for each word. (see ner-tags above)\n- *sentence-IDs*: a list of interger representing the sentence number for each word of the document.\n- *relations*:: a list of document relations.\n - *source-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the source entity.\n - *source-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the source entity.\n - *relation-type*: a list of relation tags.\n - *target-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the target entity.\n - *target-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the target entity.\n\nFor instance, a relation is defined by the instances of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position.\nIn the following example, the first relation of the first document is shown:\n\nthe output is:\n\nThat means:\n the entity in sentence number *1*, starting at the token position *4* has a *uses* relation with the entity in sentence number *1* starting at token position *1*", "### Data Splits\n\nThe data was not split. It contains the test set only.", "## Dataset Creation", "### Curation Rationale\n\nAlthough there is a long tradition of work in NLP on extracting entities and relations from text to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset construction process has been split in five main phases:\n 1. 
Text pre-processing. As the first operation, we check the content of each document and we tokenized it. This initial check was necessary since some of the original texts were automatically translated into English by the authors of the dataset. The translations were never validated, indeed, several errors have been found and fixed.\n \n 2. Text Annotation. Each text has been annotated by using the guidelines. The team was composed by five annotators with high expertise in BPMN. Each document has been assigned to three experts that were in change of identifying all the elements and flows with each document. In this phase, we used the the Inception tool to support annotators.\n \n 3. Automatic annotation fixing. After the second phase, we ran an automatic procedure relying on a rule-based script to automatically fix annotations that were not compliant with the guidelines. For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is the missing of the article within an annotation related to an Actor. In this case, the script included it in the annotation. This phase allowed to remove possible annotation errors and to obtain annotations compliant with the guidelines.\n\n 4. Agreement Computation. Here, we computed, on the annotation provided by the experts, the agreement scores for each process element and for each relation between process elements pair adopting the methodology proposed in Hripcsak *et al.*. We measured the agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1-measure as the number of cases that raters agree are negative grows. By following such a methodology, an annotation was considered in agreement among the experts if and only if they capture the same span of words and they assign the same process element tag to the annotation.\n\n 5. Reconciliation. The last phase consisted of the mitigation of disagreements within the annotations provided by the experts. The aim of this phase is to obtain a shared and agreed set of gold standard annotations on each text for both entities and relations. Such entities also enable the generation of the related full-connected process model flow that can be rendered by using, but not limited to, a BPMN diagram. During this last phase, among the 47 documents originally included into the dataset, 2 of them were discarded. These texts were not fully annotated by the annotators since they were not be able to completely understand which process elements were actually included in some specific parts of the text. 
For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations.", "#### Who are the source language producers?\n\nEnglish", "### Annotations", "#### Annotation process\nYou can read about the annotation process in the original paper URL", "#### Who are the annotators?\n\nExpert Annotators", "### Personal and Sensitive Information\n\nNo personal or sensitive information issues.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset has no social impact", "### Discussion of Biases\n\nNo bias found in the dataset", "### Other Known Limitations\n\nThe *Further specification* and *AND Gateway* elements obtained very poor performance on the baselines proposed in the paper.\nThe *AND Gateway* is the less represented process elements in this dataset.\nThe *Further Specification* process element was the most difficult element to annotate.", "## Additional Information", "### Dataset Curators\n\n- Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy)\n- Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy)\n- Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy)\n- Han van der Aa (University of Mannheim, Mannheim, Germany)\n- Simone Ponzetto (University of Mannheim, Mannheim, Germany)", "### Licensing Information", "### Contributions\n\nThanks to Patrizio Bellan for adding this dataset.", "#### <a name=\"updates\"></a>Update\n- v1.0.0: Added token classification task\n- v1.0.1: Added extraction relation task\n- v1.1.0: Fixed minor errors, fixed performs relations\n\nVersion 1.1.0 cab be found here", "## <a name=\"annotationguidelines\"></a>Annotation Guidelines", "### Inception Schema\n\nThe inception schema can be found here", "### Annotation Guidelines\n\nThe Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded here", "### Article\n\nThe article can be downloaded here", "### Python Interface\n\nA Python interface (beta version) to interact with the dataset can be found here\n\nYou can find the BASELINES, the annotation data, and a graphical interface to visualize predictions here", "### Benchmarks\n\nA Python benchmarking procedure package to test approaches on the PET dataset ca be found here", "## <a name=\"loadingdata\"></a>Loading data", "### Token-classification task", "### Relations-extraction task" ]
c4a3428883440ffabcba3afe9ed7ee94ffd13abb
# Dataset Card

## Disclaimer

All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder.

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
- [About](#about)

## Dataset Description

- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

NFT images dataset for unconditional generation.

NFT collection available [here](https://opensea.io/collection/hapeprime).

Model is available [here](https://huggingface.co/huggingnft/hapeprime).

Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## How to use

How to load this dataset directly with the datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("huggingnft/hapeprime")
```

## Dataset Structure

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

The data fields are the same among all splits.

- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.

### Data Splits

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@InProceedings{huggingnft,
    author = {Aleksey Korshuk},
    year = {2022}
}
```

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)

[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)

[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
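As a quick start complementing the loading snippet in *How to use* above, the sketch below inspects one record through the data fields listed in this card. It is an illustrative example only: the split name `train` is an assumption, and the `image` feature is decoded to a PIL image by the `datasets` library.

```python
from datasets import load_dataset

dataset = load_dataset("huggingnft/hapeprime")

sample = dataset["train"][0]  # split name assumed
print(sample["id"], sample["image_original_url"])
sample["image"].save("hapeprime_0.png")  # the image feature decodes to a PIL image
```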
huggingnft/hapeprime
[ "license:mit", "huggingnft", "nft", "huggan", "gan", "image", "images", "region:us" ]
2022-04-14T10:40:21+00:00
{"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/hapeprime"]}
2022-04-16T16:59:08+00:00
[]
[]
TAGS #license-mit #huggingnft #nft #huggan #gan #image #images #region-us
# Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - How to use - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - About ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Point of Contact: ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available here. Model is available here. Check Space: link. ### Supported Tasks and Leaderboards ## How to use How to load this dataset directly with the datasets library: ## Dataset Structure ### Data Fields The data fields are the same among all splits. - 'image': an 'image' feature. - 'id': an 'int' feature. - 'token_metadata': a 'str' feature. - 'image_original_url': a 'str' feature. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#license-mit #huggingnft #nft #huggan #gan #image #images #region-us \n", "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
61faf004bd4a26daa27ec3127bc55e5c60829cbe
# Pages of Early Soviet Performance (PESP)

This dataset was created as part of the [Pages of Early Soviet Performance](https://cdh.princeton.edu/projects/pages-early-soviet-performance/) project at Princeton and provides text and image research data from a previously scanned [collection of illustrated periodicals](https://dpul.princeton.edu/slavic/catalog?f%5Breadonly_collections_ssim%5D%5B%5D=Russian+Illustrated+Periodicals) held by Princeton University's Slavic Collections. The project was a partnership with ITMO University in Saint Petersburg. Our work focused on document segmentation and the prediction of image, text, title, and mixedtext regions in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements such as graphics, photographs, and illustrations. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents.

For each of the ten journals of interest in Princeton's digital collections (DPUL), we started with the IIIF manifest URI. With these manifests, we downloaded each of the 24,000 document images. The URI for each of the images is included in the dataset and a full list is available in `IIIF_URIs.json`.

## Authors

Natalia Ermolaev, Thomas Keenan, Katherine Reischl, Andrew Janco, Quinn Dombrowski, Antonina Puchkovskaia, Alexander Jacobson, Anastasiia Mamonova, Michael Galperin and Vladislav Tretyak

## Journal manifests

- [Эрмитаж](https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest)
- [Вестник искусств](https://figgy.princeton.edu/concern/scanned_resources/ad256b35-9ad0-4f75-bf83-3bad1a7c6018/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/ad256b35-9ad0-4f75-bf83-3bad1a7c6018/manifest)
- [Советский театр](https://figgy.princeton.edu/concern/scanned_resources/f33993bb-a041-40a1-b11f-f660da825583/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/f33993bb-a041-40a1-b11f-f660da825583/manifest)
- [Рабис](https://figgy.princeton.edu/concern/scanned_resources/01f4236f-0a2f-473c-946f-d9bbec12f8ea/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/01f4236f-0a2f-473c-946f-d9bbec12f8ea/manifest)
- [Даёшь](https://figgy.princeton.edu/concern/scanned_resources/e036a5da-97a8-4041-ad62-a57af44359e2/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/e036a5da-97a8-4041-ad62-a57af44359e2/manifest)
- [Персимфанс](https://figgy.princeton.edu/concern/scanned_resources/af43d19a-3659-4dd0-a0fc-4c74ce521ad6/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/af43d19a-3659-4dd0-a0fc-4c74ce521ad6/manifest)
- [Тридцать дней](https://figgy.princeton.edu/concern/scanned_resources/d2d488af-2980-4554-a9ef-aacbaf463ec8/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/d2d488af-2980-4554-a9ef-aacbaf463ec8/manifest)
- [За пролетарское искусство](https://figgy.princeton.edu/concern/scanned_resources/38f89d57-8e64-4033-97d6-b925c407584a/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/38f89d57-8e64-4033-97d6-b925c407584a/manifest)
- [Бригада художников](https://figgy.princeton.edu/concern/scanned_resources/66d00a87-5ea9-439a-a909-95d697401a2b/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/66d00a87-5ea9-439a-a909-95d697401a2b/manifest)
- [Зрелища](https://figgy.princeton.edu/concern/scanned_resources/1af8b322-a0b1-46af-8541-5c3054af8098/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/1af8b322-a0b1-46af-8541-5c3054af8098/manifest)

## Model

Using [makesense.ai](https://www.makesense.ai/) and a custom active learning application called ["Mayakovsky"](https://github.com/CDH-ITMO-Periodicals-Project/mayakovsky) we generated training data for a [YOLOv5 model](https://docs.ultralytics.com/tutorials/train-custom-datasets/). The model was fine-tuned on the new labels and predictions were generated for all images in the collection.

## OCR

Using the model's predictions for image, title, text and mixedtext segments, we cropped the image using the bounding boxes and ran OCR on each document segment using Tesseract, Google Vision, and ABBYY FineReader. Given that the output of these various OCR engines can be difficult to compare, the document segments give a common denominator for comparison of OCR outputs. Having three variations of the extracted text can be useful for experiments with OCR post-correction.

## Dataset

The dataset contains an entry for each image with the following fields:

- filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page.
- dpul: the URL for the image's journal in Digital Princeton University Library
- journal: the journal name
- year: the year of the journal issue
- issue: the issue for the image
- URI: the IIIF URI used to fetch the image from Princeton's IIIF server
- yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (object-class x y width height). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.
- yolo_predictions: a List with a dictionary for each of the model's predictions with fields for:
  - label: the predicted label
  - x: the x-value location of the center point of the prediction
  - y: the y-value location of the center point of the prediction
  - w: the total width of the prediction's bounding box
  - h: the total height of the prediction's bounding box
- abbyy_text: the text extracted from the predicted document segment using ABBYY FineReader. Note that due to costs, only about 800 images have this data
- tesseract_text: the text extracted from the predicted document segment using Tesseract.
- vision_text: the text extracted from the predicted document segment using Google Vision.
- vision_labels: entities recognized by Google Vision in image blocks and separated by | (ex. Boating|Boat|Printmaking)

# Usage

```python
from datasets import load_dataset

dataset = load_dataset('ajanco/pesp')
for item in dataset['train']:
    for prediction in item['yolo_predictions']:
        print(prediction)
```
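Since the `yolo_predictions` coordinates are normalized center-point values, a common first step is converting them to pixel bounding boxes, for example to crop a predicted region for OCR experiments. The sketch below is a minimal illustration using only the fields documented above; fetching the page image itself (here a hypothetical local file opened with Pillow) is an assumption of the example, not part of the dataset.

```python
from datasets import load_dataset
from PIL import Image  # assumption: page images fetched locally via the IIIF URIs

def yolo_to_pixel_box(pred, page_width, page_height):
    """Convert a normalized xywh prediction dict to pixel (left, top, right, bottom)."""
    left = (pred["x"] - pred["w"] / 2) * page_width
    top = (pred["y"] - pred["h"] / 2) * page_height
    right = (pred["x"] + pred["w"] / 2) * page_width
    bottom = (pred["y"] + pred["h"] / 2) * page_height
    return int(left), int(top), int(right), int(bottom)

dataset = load_dataset('ajanco/pesp')
item = dataset['train'][0]
page = Image.open('page.jpg')  # hypothetical local copy of the image at item['URI']
for pred in item['yolo_predictions']:
    box = yolo_to_pixel_box(pred, page.width, page.height)
    segment = page.crop(box)   # cropped region, ready for an OCR engine
```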
ajanco/pesp
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "source_datasets:original", "language:ru", "license:afl-3.0", "region:us" ]
2022-04-14T11:18:44+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated", "machine-generated"], "language": ["ru"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "The Pages of Early Soviet Performance (PESP) uses machine learning to generate multiple datasets of early-Soviet illustrated periodicals related to the performing arts. By using computer vision techniques and training a YOLO (You Only Look Once) real-time object detection model, we are producing textual and image data that will facilitate new avenues of research about Soviet culture during the first decades after the October Revolution (1917-1932).\n\nOur starting point is Princeton University Library's Digital PUL (DPUL) where ten titles - totaling 526 issues and approximately 26,000 pages - of Soviet performance journals have been digitized and can be freely viewed online. Journals are a diverse and complex genre: taken together, this collection contains hundreds of thousands of articles, poems, editorial commentary, advertisements as well as images, illustrations and graphic art. Today, researchers can browse the journals and view and download high-quality page images on DPUL."}
2022-07-01T15:18:15+00:00
[]
[ "ru" ]
TAGS #task_categories-other #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #source_datasets-original #language-Russian #license-afl-3.0 #region-us
# Pages of Early Soviet Performance (PESP) This dataset was created as part of the Pages of Early Soviet Performance project at Princeton and provides text and image research data from a previously scanned collection of illustrated periodicals held by Princeton University's Slavic Collections. The project was a partnership with ITMO University in Saint Petersburg. Our work focused on document segmentation and the prediction of image, text, title, and mixedtext regions in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements such as graphics, photographs, and illustrations. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents. For each of the ten journals of interest in Princeton's digital collections (DPUL), we started with the IIIF manifest URI. With these manifests, we downloaded each of the 24,000 document images. The URI for each of the images is included in the dataset and a full list is available in 'IIIF_URIs.json'. ## Authors Natalia Ermolaev, Thomas Keenan, Katherine Reischl, Andrew Janco, Quinn Dombrowski, Antonina Puchkovskaia, Alexander Jacobson, Anastasiia Mamonova, Michael Galperin and Vladislav Tretyak ## Journal manifests - Эрмитаж - Вестник искусств - Советский театр - Рабис - Даёшь - Персимфанс - Тридцать дней - За пролетарское искусство - Бригада художников - Зрелища ## Model Using URL and a custom active learning application called "Mayakovsky" we generated training data for a YOLOv5 model. The model was fine-tuned on the new labels and predictions were generated for all images in the collection. ## OCR Using the model's predictions for image, title, text and mixedtext segments, we cropped the image using the bounding boxes and ran OCR on each document segment using Tesseract, Google Vision, and ABBYY FineReader. Given that the output of these various OCR engines can be difficult to compare, the document segments give a common denominator for comparison of OCR outputs. Having three variations of the extracted text can be useful for experiments with OCR post-correction. ## Dataset The dataset contains an entry for each image with the following fields: - filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page. - dpul: the URL for the image's journal in Digital Princeton University Library - journal: the journal name - year: the year of the journal issue - issue: the issue for the image - URI: the IIIF URI used to fetch the image from Princeton's IIIF server - yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (object-class x y width height). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3. - yolo_predictions: a List with a dictionary for each of the model's predictions with fields for: - label: the predicted label - x: the x-value location of the center point of the prediction - y: the y-value location of the center point of the prediction - w: the total width of the prediction's bounding box - h: the total height of the prediction's bounding box - abbyy_text: the text extracted from the predicted document segment using ABBY FineReader. Note that due to costs, only about 800 images have this data - tesseract_text: the text extracted from the predicted document segment using Tesseract. - vision_text: the text extracted from the predicted document segment using Google Vision. 
- vision_labels: entities recognized by Google Vision in image blocks and separated by | (ex. Boating|Boat|Printmaking) # Usage
[ "# Pages of Early Soviet Performance (PESP)\n\nThis dataset was created as part of the Pages of Early Soviet Performance project at Princeton and provides text and image research data from a previously scanned collection of illustrated periodicals held by Princeton University's Slavic Collections. The project was a partnership with ITMO University in Saint Petersburg. Our work focused on document segmentation and the prediction of image, text, title, and mixedtext regions in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements such as graphics, photographs, and illustrations. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents.\n\nFor each of the ten journals of interest in Princeton's digital collections (DPUL), we started with the IIIF manifest URI. With these manifests, we downloaded each of the 24,000 document images. The URI for each of the images is included in the dataset and a full list is available in 'IIIF_URIs.json'.", "## Authors \nNatalia Ermolaev, Thomas Keenan, Katherine Reischl, Andrew Janco, Quinn Dombrowski, Antonina Puchkovskaia, Alexander Jacobson, Anastasiia Mamonova, Michael Galperin and Vladislav Tretyak", "## Journal manifests\n- Эрмитаж\n- Вестник искусств\n- Советский театр\n- Рабис\n- Даёшь\n- Персимфанс\n- Тридцать дней\n- За пролетарское искусство\n- Бригада художников\n- Зрелища", "## Model \n\nUsing URL and a custom active learning application called \"Mayakovsky\" we generated training data for a YOLOv5 model. The model was fine-tuned on the new labels and predictions were generated for all images in the collection.", "## OCR \n\nUsing the model's predictions for image, title, text and mixedtext segments, we cropped the image using the bounding boxes and ran OCR on each document segment using Tesseract, Google Vision, and ABBYY FineReader. Given that the output of these various OCR engines can be difficult to compare, the document segments give a common denominator for comparison of OCR outputs. Having three variations of the extracted text can be useful for experiments with OCR post-correction.", "## Dataset \n\nThe dataset contains an entry for each image with the following fields: \n- filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page.\n- dpul: the URL for the image's journal in Digital Princeton University Library\n- journal: the journal name\n- year: the year of the journal issue\n- issue: the issue for the image\n- URI: the IIIF URI used to fetch the image from Princeton's IIIF server \n- yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (object-class x y width height). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.\n- yolo_predictions: a List with a dictionary for each of the model's predictions with fields for: \n - label: the predicted label\n - x: the x-value location of the center point of the prediction \n - y: the y-value location of the center point of the prediction\n - w: the total width of the prediction's bounding box \n - h: the total height of the prediction's bounding box\n - abbyy_text: the text extracted from the predicted document segment using ABBY FineReader. 
Note that due to costs, only about 800 images have this data\n - tesseract_text: the text extracted from the predicted document segment using Tesseract.\n - vision_text: the text extracted from the predicted document segment using Google Vision.\n - vision_labels: entities recognized by Google Vision in image blocks and separated by | (ex. Boating|Boat|Printmaking)", "# Useage" ]
[ "TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #source_datasets-original #language-Russian #license-afl-3.0 #region-us \n", "# Pages of Early Soviet Performance (PESP)\n\nThis dataset was created as part of the Pages of Early Soviet Performance project at Princeton and provides text and image research data from a previously scanned collection of illustrated periodicals held by Princeton University's Slavic Collections. The project was a partnership with ITMO University in Saint Petersburg. Our work focused on document segmentation and the prediction of image, text, title, and mixedtext regions in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements such as graphics, photographs, and illustrations. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents.\n\nFor each of the ten journals of interest in Princeton's digital collections (DPUL), we started with the IIIF manifest URI. With these manifests, we downloaded each of the 24,000 document images. The URI for each of the images is included in the dataset and a full list is available in 'IIIF_URIs.json'.", "## Authors \nNatalia Ermolaev, Thomas Keenan, Katherine Reischl, Andrew Janco, Quinn Dombrowski, Antonina Puchkovskaia, Alexander Jacobson, Anastasiia Mamonova, Michael Galperin and Vladislav Tretyak", "## Journal manifests\n- Эрмитаж\n- Вестник искусств\n- Советский театр\n- Рабис\n- Даёшь\n- Персимфанс\n- Тридцать дней\n- За пролетарское искусство\n- Бригада художников\n- Зрелища", "## Model \n\nUsing URL and a custom active learning application called \"Mayakovsky\" we generated training data for a YOLOv5 model. The model was fine-tuned on the new labels and predictions were generated for all images in the collection.", "## OCR \n\nUsing the model's predictions for image, title, text and mixedtext segments, we cropped the image using the bounding boxes and ran OCR on each document segment using Tesseract, Google Vision, and ABBYY FineReader. Given that the output of these various OCR engines can be difficult to compare, the document segments give a common denominator for comparison of OCR outputs. Having three variations of the extracted text can be useful for experiments with OCR post-correction.", "## Dataset \n\nThe dataset contains an entry for each image with the following fields: \n- filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page.\n- dpul: the URL for the image's journal in Digital Princeton University Library\n- journal: the journal name\n- year: the year of the journal issue\n- issue: the issue for the image\n- URI: the IIIF URI used to fetch the image from Princeton's IIIF server \n- yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (object-class x y width height). 
The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.\n- yolo_predictions: a List with a dictionary for each of the model's predictions with fields for: \n - label: the predicted label\n - x: the x-value location of the center point of the prediction \n - y: the y-value location of the center point of the prediction\n - w: the total width of the prediction's bounding box \n - h: the total height of the prediction's bounding box\n - abbyy_text: the text extracted from the predicted document segment using ABBY FineReader. Note that due to costs, only about 800 images have this data\n - tesseract_text: the text extracted from the predicted document segment using Tesseract.\n - vision_text: the text extracted from the predicted document segment using Google Vision.\n - vision_labels: entities recognized by Google Vision in image blocks and separated by | (ex. Boating|Boat|Printmaking)", "# Useage" ]
ce994f5c4a86e3fb2db2f045dbc600575e2e6fb8
# Dataset Card for Conceptual Captions

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:[email protected])

### Dataset Summary

Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.

### Dataset Preprocessing

This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("conceptual_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

### Supported Tasks and Leaderboards

- `image-captioning`: This dataset can be used to train a model for the Image Captioning task. The leaderboard for this task is available [here](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard). Official submission output captions are scored against the reference captions from the hidden test set using [this](https://github.com/tylin/coco-caption) implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.

### Languages

All captions are in English.

## Dataset Structure

### Data Instances

#### `unlabeled`

Each instance in this configuration represents a single image with a caption:

```
{
  'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
  'caption': 'a very typical bus station'
}
```

#### `labeled`

Each instance in this configuration represents a single image with a caption and additional machine-generated image labels and confidence scores:

```
{
  'image_url': 'https://thumb1.shutterstock.com/display_pic_with_logo/261388/223876810/stock-vector-christmas-tree-on-a-black-background-vector-223876810.jpg',
  'caption': 'christmas tree on a black background .',
  'labels': ['christmas tree', 'christmas decoration', 'font', 'text', 'graphic design', 'illustration', 'interior design', 'tree', 'christmas eve', 'ornament', 'fir', 'plant', 'pine', 'pine family', 'graphics'],
  'MIDs': ['/m/025nd', '/m/05fc9mj', '/m/03gq5hm', '/m/07s6nbt', '/m/03c31', '/m/01kr8f', '/m/0h8nzzj', '/m/07j7r', '/m/014r1s', '/m/05ykl4', '/m/016x4z', '/m/05s2s', '/m/09t57', '/m/01tfm0', '/m/021sdg'],
  'confidence_scores': [0.9818305373191833, 0.952756941318512, 0.9227379560470581, 0.8524878621101379, 0.7597672343254089, 0.7493422031402588, 0.7332468628883362, 0.6869218349456787, 0.6552258133888245, 0.6357356309890747, 0.5992692708969116, 0.585474967956543, 0.5222904086112976, 0.5113164782524109, 0.5036579966545105]
}
```

### Data Fields

#### `unlabeled`

- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.

#### `labeled`

- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `labels`: A sequence of machine-generated labels obtained using the [Google Cloud Vision API](https://cloud.google.com/vision).
- `MIDs`: A sequence of machine-generated identifiers (MIDs) corresponding to each label's Google Knowledge Graph entry.
- `confidence_scores`: A sequence of confidence scores denoting how likely it is that the corresponding labels are present in the image.
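Because the machine-generated labels in the `labeled` configuration carry confidence scores, a common preprocessing step is to keep only labels above some threshold. The following is a minimal sketch of that idea using the fields described above; the 0.8 cutoff is an arbitrary choice for illustration, not a value recommended by the dataset authors.

```python
from datasets import load_dataset

dset = load_dataset("conceptual_captions", "labeled", split="train")


def keep_confident_labels(example, threshold=0.8):
    # Keep only the label/MID/score triples whose confidence meets the threshold.
    triples = [
        (label, mid, score)
        for label, mid, score in zip(
            example["labels"], example["MIDs"], example["confidence_scores"]
        )
        if score >= threshold
    ]
    example["labels"] = [t[0] for t in triples]
    example["MIDs"] = [t[1] for t in triples]
    example["confidence_scores"] = [t[2] for t in triples]
    return example


dset = dset.map(keep_confident_labels)
```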
### Data Splits

#### `unlabeled`

The basic version of the dataset is split into Training and Validation splits. The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs.

#### `labeled`

The labeled version of the dataset has a single split. The entire data is contained in the Training split, which is a subset of 2,007,090 image-URL/caption pairs from the Training set of the `unlabeled` config.

## Dataset Creation

### Curation Rationale

From the paper:
> In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations, Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M ⟨image, description⟩ pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.

### Source Data

#### Initial Data Collection and Normalization

From the homepage:
> For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.
>
> To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.
>
> We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.
>
> In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding around 60% of the candidates that reach this stage.
>
> The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.
>
> We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles” becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.
>
> Finally, we perform another round of filtering to identify low-count concepts. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.

#### Who are the source language producers?

Not specified.

### Annotations

#### Annotation process

Annotations are extracted jointly with the images using the automatic pipeline.

#### Who are the annotators?

Not specified.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.

### Licensing Information

The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

### Citation Information

```bibtex
@inproceedings{sharma2018conceptual,
  title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
  author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
  booktitle = {Proceedings of ACL},
  year = {2018},
}
```

### Contributions

Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) and [@mariosasko](https://github.com/mariosasko) for adding this dataset.
conceptual_captions
[ "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-04-14T12:08:21+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "paperswithcode_id": "conceptual-captions", "pretty_name": "Conceptual Captions", "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 623230370, "num_examples": 3318333}, {"name": "validation", "num_bytes": 2846024, "num_examples": 15840}], "download_size": 0, "dataset_size": 626076394}, {"config_name": "unlabeled", "features": [{"name": "image_url", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 584520156, "num_examples": 3318333}, {"name": "validation", "num_bytes": 2698726, "num_examples": 15840}], "download_size": 567211172, "dataset_size": 587218882}, {"config_name": "labeled", "features": [{"name": "image_url", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "labels", "sequence": "string"}, {"name": "MIDs", "sequence": "string"}, {"name": "confidence_scores", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 1199330856, "num_examples": 2007090}], "download_size": 1282463277, "dataset_size": 1199330856}]}
2024-01-18T09:32:42+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-other #region-us
# Dataset Card for Conceptual Captions ## Table of Contents - Dataset Description - Dataset Summary - Dataset Preprocessing - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: Conceptual Captions homepage - Repository: Conceptual Captions repository - Paper: Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning - Leaderboard: Conceptual Captions leaderboardhttps://URL - Point of Contact: Conceptual Captions e-mail ### Dataset Summary Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. ### Dataset Preprocessing This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code: ### Supported Tasks and Leaderboards - 'image-captioning': This dataset can be used to train model for the Image Captioning task. The leaderboard for this task is available here. Official submission output captions are scored against the reference captions from the hidden test set using this implementation of the CIDEr (primary), ROUGE-L and SPICE metrics. ### Languages All captions are in English. ## Dataset Structure ### Data Instances #### 'unlabeled' Each instance in this configuration represents a single image with a caption: #### 'labeled' Each instance in this configuration represents a single image with a caption with addtional machine-generated image labels and confidence scores: ### Data Fields #### 'unlabeled' - 'image_url': Static URL for downloading the image associated with the post. - 'caption': Textual description of the image. #### 'labeled' - 'image_url': Static URL for downloading the image associated with the post. - 'caption': Textual description of the image. - 'labels': A sequence of machine-generated labels obtained using the Google Cloud Vision API. - 'MIDs': A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry. - 'confidence_scores': A sequence of confidence scores denoting how likely the corresponing labels are present on the image. ### Data Splits #### 'unlabeled' The basic version of the dataset split into Training and Validation splits. The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs. #### 'labeled' The labeled version of the dataset with a single. The entire data is contained in Training split, which is a subset of 2,007,090 image-URL/caption pairs from the Training set of the 'unlabeled' config. 
## Dataset Creation ### Curation Rationale From the paper: > In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M himage, descriptioni pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. ### Source Data #### Initial Data Collection and Normalization From the homepage: >For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable. > >To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters. > >We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through. > >In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding >around 60% of the candidates that reach this stage. > >The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR. > >We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent. 
> >Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”. #### Who are the source language producers? Not specified. ### Annotations #### Annotation process Annotations are extracted jointly with the images using the automatic pipeline. #### Who are the annotators? Not specified. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut. ### Licensing Information The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset. ### Contributions Thanks to @abhishekkrthakur and @mariosasko for adding this dataset.
[ "# Dataset Card for Conceptual Captions", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: Conceptual Captions homepage\n- Repository: Conceptual Captions repository\n- Paper: Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning\n- Leaderboard: Conceptual Captions leaderboardhttps://URL\n- Point of Contact: Conceptual Captions e-mail", "### Dataset Summary\n\nConceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.", "### Dataset Preprocessing\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:", "### Supported Tasks and Leaderboards\n\n- 'image-captioning': This dataset can be used to train model for the Image Captioning task. The leaderboard for this task is available here. Official submission output captions are scored against the reference captions from the hidden test set using this implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.", "### Languages\n\nAll captions are in English.", "## Dataset Structure", "### Data Instances", "#### 'unlabeled'\n\nEach instance in this configuration represents a single image with a caption:", "#### 'labeled'\n\nEach instance in this configuration represents a single image with a caption with addtional machine-generated image labels and confidence scores:", "### Data Fields", "#### 'unlabeled'\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.", "#### 'labeled'\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.\n- 'labels': A sequence of machine-generated labels obtained using the Google Cloud Vision API.\n- 'MIDs': A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry.\n- 'confidence_scores': A sequence of confidence scores denoting how likely the corresponing labels are present on the image.", "### Data Splits", "#### 'unlabeled'\n\nThe basic version of the dataset split into Training and Validation splits. The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs.", "#### 'labeled'\n\nThe labeled version of the dataset with a single. 
The entire data is contained in Training split, which is a subset of 2,007,090 image-URL/caption pairs from the Training set of the 'unlabeled' config.", "## Dataset Creation", "### Curation Rationale\n\nFrom the paper:\n> In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M himage, descriptioni pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.", "### Source Data", "#### Initial Data Collection and Normalization\n\nFrom the homepage:\n>For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.\n>\n>To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.\n>\n>We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.\n>\n>In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding >around 60% of the candidates that reach this stage.\n>\n>The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.\n>\n>We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. 
For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.\n>\n>Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.", "#### Who are the source language producers?\n\nNot specified.", "### Annotations", "#### Annotation process\n\nAnnotations are extracted jointly with the images using the automatic pipeline.", "#### Who are the annotators?\n\nNot specified.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nPiyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.", "### Licensing Information\n\nThe dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.", "### Contributions\n\nThanks to @abhishekkrthakur and @mariosasko for adding this dataset." ]
[ "TAGS\n#task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-other #region-us \n", "# Dataset Card for Conceptual Captions", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: Conceptual Captions homepage\n- Repository: Conceptual Captions repository\n- Paper: Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning\n- Leaderboard: Conceptual Captions leaderboardhttps://URL\n- Point of Contact: Conceptual Captions e-mail", "### Dataset Summary\n\nConceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.", "### Dataset Preprocessing\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:", "### Supported Tasks and Leaderboards\n\n- 'image-captioning': This dataset can be used to train model for the Image Captioning task. The leaderboard for this task is available here. Official submission output captions are scored against the reference captions from the hidden test set using this implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.", "### Languages\n\nAll captions are in English.", "## Dataset Structure", "### Data Instances", "#### 'unlabeled'\n\nEach instance in this configuration represents a single image with a caption:", "#### 'labeled'\n\nEach instance in this configuration represents a single image with a caption with addtional machine-generated image labels and confidence scores:", "### Data Fields", "#### 'unlabeled'\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.", "#### 'labeled'\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.\n- 'labels': A sequence of machine-generated labels obtained using the Google Cloud Vision API.\n- 'MIDs': A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry.\n- 'confidence_scores': A sequence of confidence scores denoting how likely the corresponing labels are present on the image.", "### Data Splits", "#### 'unlabeled'\n\nThe basic version of the dataset split into Training and Validation splits. 
The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs.", "#### 'labeled'\n\nThe labeled version of the dataset with a single. The entire data is contained in Training split, which is a subset of 2,007,090 image-URL/caption pairs from the Training set of the 'unlabeled' config.", "## Dataset Creation", "### Curation Rationale\n\nFrom the paper:\n> In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M himage, descriptioni pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.", "### Source Data", "#### Initial Data Collection and Normalization\n\nFrom the homepage:\n>For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.\n>\n>To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.\n>\n>We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.\n>\n>In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding >around 60% of the candidates that reach this stage.\n>\n>The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.\n>\n>We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. 
For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.\n>\n>Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.", "#### Who are the source language producers?\n\nNot specified.", "### Annotations", "#### Annotation process\n\nAnnotations are extracted jointly with the images using the automatic pipeline.", "#### Who are the annotators?\n\nNot specified.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nPiyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.", "### Licensing Information\n\nThe dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.", "### Contributions\n\nThanks to @abhishekkrthakur and @mariosasko for adding this dataset." ]
9c3742c2e077f17b9e6544910cfc4e23ae81db9d
# Dataset Card

## Disclaimer

All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder.

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
- [About](#about)

## Dataset Description

- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

NFT images dataset for unconditional generation.

NFT collection available [here](https://opensea.io/collection/cryptoskulls).

Model is available [here](https://huggingface.co/huggingnft/cryptoskulls).

Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## How to use

How to load this dataset directly with the datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("huggingnft/cryptoskulls")
```

## Dataset Structure

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

The data fields are the same among all splits.

- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
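As a quick sketch of how these fields can be accessed, the snippet below loads the collection and inspects the first record. It assumes a single `train` split, which is typical for Hugging Face image datasets but not confirmed by this card.

```python
from datasets import load_dataset

dataset = load_dataset("huggingnft/cryptoskulls", split="train")

sample = dataset[0]
print(sample["id"])                   # integer token id
print(sample["token_metadata"])       # raw metadata string for the token
print(sample["image_original_url"])   # source URL of the NFT image

# The `image` field is decoded into a PIL image by the datasets library.
sample["image"].save("cryptoskull_0.png")
```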
### Data Splits

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year=2022
}
```

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)

[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)

[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
huggingnft/cryptoskulls
[ "license:mit", "huggingnft", "nft", "huggan", "gan", "image", "images", "region:us" ]
2022-04-14T13:16:34+00:00
{"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cryptoskulls"]}
2022-04-16T16:59:08+00:00
[]
[]
TAGS #license-mit #huggingnft #nft #huggan #gan #image #images #region-us
# Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - How to use - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - About ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Point of Contact: ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available here. Model is available here. Check Space: link. ### Supported Tasks and Leaderboards ## How to use How to load this dataset directly with the datasets library: ## Dataset Structure ### Data Fields The data fields are the same among all splits. - 'image': an 'image' feature. - 'id': an 'int' feature. - 'token_metadata': a 'str' feature. - 'image_original_url': a 'str' feature. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#license-mit #huggingnft #nft #huggan #gan #image #images #region-us \n", "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
e57e10cf1f793a854486c7dc040ac18d59595199
# Dataset for project: kor_hate_eval(APEACH) ![](https://github.com/jason9693/APEACH/raw/master/resource/dist_topics.png) ## Sample Code <a href="https://colab.research.google.com/drive/1djd0fuoMYIaf7VCHaLQIziJi4_yBJruP#scrollTo=VPR24ysr5Q7k"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="base"/></a> ## Dataset Description Korean Hate Speech Evaluation Datasets: trained with [BEEP!](https://huggingface.co/datasets/kor_hate) and evaluated with [APEACH](https://github.com/jason9693/APEACH) - **Repository: [Korean HateSpeech Evaluation Dataset](https://github.com/jason9693/APEACH)** - **Paper: [APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets](https://arxiv.org/abs/2202.12459)** - **Point of Contact: [Kichang Yang]([email protected])** ### Languages ko-KR ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json {'text': ['(현재 호텔주인 심정) 아18 난 마른하늘에 날벼락맞고 호텔망하게생겼는데 누군 계속 추모받네....', '....한국적인 미인의 대표적인 분...너무나 곱고아름다운모습...그모습뒤의 슬픔을 미처 알지못했네요ㅠ'], 'class': ['Spoiled', 'Default']} ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "class": "ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train (binarized BEEP!) | 7896 | | valid (APEACH) | 3770 | ## Citation ``` @article{yang2022apeach, title={APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets}, author={Yang, Kichang and Jang, Wonjun and Cho, Won Ik}, journal={arXiv preprint arXiv:2202.12459}, year={2022} } ```
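For readers who want a quick look without opening the Colab notebook linked above, a minimal loading sketch; the field names come from the schema above, while the split names are assumed to match the split table:

```python
from datasets import load_dataset

# Split names are assumed to mirror the split table above ("train"/"valid");
# print(dataset) to confirm what the hub repository actually exposes.
dataset = load_dataset("jason9693/APEACH")
print(dataset)

sample = dataset["train"][0]
print(sample["text"])   # a Korean comment
print(sample["class"])  # ClassLabel: 0 = Default, 1 = Spoiled
```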
jason9693/APEACH
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:crowd-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ko", "license:cc-by-sa-4.0", "arxiv:2202.12459", "region:us" ]
2022-04-14T13:27:43+00:00
{"annotations_creators": ["crowdsourced", "crowd-generated"], "language_creators": ["found"], "language": ["ko"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["binary-classification"], "paperswithcode_id": "apeach", "pretty_name": "APEACH"}
2022-07-05T03:18:07+00:00
[ "2202.12459" ]
[ "ko" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-crowd-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Korean #license-cc-by-sa-4.0 #arxiv-2202.12459 #region-us
Dataset for project: kor\_hate\_eval(APEACH) ============================================ ![](URL Sample Code ----------- <a href="URL src="URL alt="base"/> Dataset Description ------------------- Korean Hate Speech Evaluation Datasets: trained with BEEP! and evaluated with APEACH * Repository: Korean HateSpeech Evaluation Dataset * Paper: APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets * Point of Contact: Kichang Yang ### Languages ko-KR Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nko-KR\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-crowd-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Korean #license-cc-by-sa-4.0 #arxiv-2202.12459 #region-us \n", "### Languages\n\n\nko-KR\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
716a41b2ec2e921f24b9b1df564a07f8643989a4
# AutoTrain Dataset for project: kor_hate_eval ## Dataset Description This dataset has been automatically processed by AutoTrain for project kor_hate_eval. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "(\ud604\uc7ac \ud638\ud154\uc8fc\uc778 \uc2ec\uc815) \uc54418 \ub09c \ub9c8\ub978\ud558\ub298\uc5d0 \ub0a0\ubcbc\ub77d\ub9de\uace0 \ud638\ud154\ub9dd\ud558\uac8c\uc0dd\uacbc\ub294\ub370 \ub204\uad70 \uacc4\uc18d \ucd94\ubaa8\ubc1b\ub124....", "target": 1 }, { "text": "....\ud55c\uad6d\uc801\uc778 \ubbf8\uc778\uc758 \ub300\ud45c\uc801\uc778 \ubd84...\ub108\ubb34\ub098 \uacf1\uace0\uc544\ub984\ub2e4\uc6b4\ubaa8\uc2b5...\uadf8\ubaa8\uc2b5\ub4a4\uc758 \uc2ac\ud514\uc744 \ubbf8\ucc98 \uc54c\uc9c0\ubabb\ud588\ub124\uc694\u3160", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 7896 | | valid | 3770 |
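Since `target` is stored as a `ClassLabel`, the integer values in the samples above map back to the names `Default`/`Spoiled`. A short sketch, with the split name assumed from the table above:

```python
from datasets import load_dataset

# Repository id taken from this card; the "train" split name is an assumption.
dataset = load_dataset("jason9693/autotrain-data-kor_hate_eval")
train = dataset["train"]

# `target` is a ClassLabel feature; int2str recovers 'Default'/'Spoiled'.
target_feature = train.features["target"]
print(target_feature.int2str(train[0]["target"]))
```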
jason9693/autotrain-data-kor_hate_eval
[ "task_categories:text-classification", "region:us" ]
2022-04-14T14:42:28+00:00
{"task_categories": ["text-classification"]}
2022-04-14T14:44:07+00:00
[]
[]
TAGS #task_categories-text-classification #region-us
AutoTrain Dataset for project: kor\_hate\_eval ============================================== Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project kor\_hate\_eval. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
ab7ad9330ec63b9652e8f091c76dfe4c549ba606
# Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft) - **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available [here](https://opensea.io/collection/azuki). Model is available [here](https://huggingface.co/huggingnft/azuki). Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingnft/azuki") ``` ## Dataset Structure [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields The data fields are the same among all splits. - `image`: an `image` feature. - `id`: an `int` feature. - `token_metadata`: a `str` feature. - `image_original_url`: a `str` feature. ### Data Splits [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingnft, author={Aleksey Korshuk}, year={2022} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
huggingnft/azuki
[ "license:mit", "huggingnft", "nft", "huggan", "gan", "image", "images", "region:us" ]
2022-04-14T19:36:39+00:00
{"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/azuki"]}
2022-04-16T16:59:08+00:00
[]
[]
TAGS #license-mit #huggingnft #nft #huggan #gan #image #images #region-us
# Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - How to use - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - About ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Point of Contact: ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available here. Model is available here. Check Space: link. ### Supported Tasks and Leaderboards ## How to use How to load this dataset directly with the datasets library: ## Dataset Structure ### Data Fields The data fields are the same among all splits. - 'image': an 'image' feature. - 'id': an 'int' feature. - 'token_metadata': a 'str' feature. - 'image_original_url': a 'str' feature. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#license-mit #huggingnft #nft #huggan #gan #image #images #region-us \n", "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
8495090e5604bf7070abe230cb59090e24ab25ae
# Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft) - **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available [here](https://opensea.io/collection/mutant-ape-yacht-club). Model is available [here](https://huggingface.co/huggingnft/mutant-ape-yacht-club). Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingnft/mutant-ape-yacht-club") ``` ## Dataset Structure [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields The data fields are the same among all splits. - `image`: an `image` feature. - `id`: an `int` feature. - `token_metadata`: a `str` feature. - `image_original_url`: a `str` feature. ### Data Splits [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingnft, author={Aleksey Korshuk}, year={2022} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
huggingnft/mutant-ape-yacht-club
[ "license:mit", "huggingnft", "nft", "huggan", "gan", "image", "images", "region:us" ]
2022-04-14T19:51:06+00:00
{"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/mutant-ape-yacht-club"]}
2022-04-16T16:59:08+00:00
[]
[]
TAGS #license-mit #huggingnft #nft #huggan #gan #image #images #region-us
# Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - How to use - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - About ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Point of Contact: ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available here. Model is available here. Check Space: link. ### Supported Tasks and Leaderboards ## How to use How to load this dataset directly with the datasets library: ## Dataset Structure ### Data Fields The data fields are the same among all splits. - 'image': an 'image' feature. - 'id': an 'int' feature. - 'token_metadata': a 'str' feature. - 'image_original_url': a 'str' feature. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#license-mit #huggingnft #nft #huggan #gan #image #images #region-us \n", "# Dataset Card", "## Disclaimer\n\nAll rights belong to their owners.\nModels and datasets can be removed from the site at the request of the copyright holder.", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- How to use\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n- About", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nNFT images dataset for unconditional generation.\n\nNFT collection available here.\n\nModel is available here.\n\nCheck Space: link.", "### Supported Tasks and Leaderboards", "## How to use\n\nHow to load this dataset directly with the datasets library:", "## Dataset Structure", "### Data Fields\n\nThe data fields are the same among all splits.\n\n- 'image': an 'image' feature.\n- 'id': an 'int' feature.\n- 'token_metadata': a 'str' feature.\n- 'image_original_url': a 'str' feature.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
dbb125403842b8924d864f09f1c0eb357bd57435
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
XiangPan/waimai_10k
[ "region:us" ]
2022-04-14T21:14:23+00:00
{}
2022-04-14T21:38:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
777a739ac615ec9c7015c36c86234533a5d50af2
# Configuration
awacke1/data.csv
[ "region:us" ]
2022-04-14T21:58:15+00:00
{"title": "Data.csv", "emoji": "\ud83d\udc28", "colorFrom": "pink", "colorTo": "gray", "sdk": "gradio", "sdk_version": "2.4.2", "app_file": "app.py", "pinned": false}
2023-09-22T12:12:17+00:00
[]
[]
TAGS #region-us
# Configuration
[ "# Configuration" ]
[ "TAGS\n#region-us \n", "# Configuration" ]
966a1e77847f3e22de5fb961665331767082d571
# Dataset Card for squad_it_exp ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Additional Information](#additional-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary SQuAD-it-exp is a dataset derived from the SQuAD-it dataset originally created by Croce et al. in 2018.<br/> SQuAD-it-exp has been enriched by adding new unanswerable questions in SQuAD v2 format.<br/> The dataset contains nearly 90,000 question/answer pairs in Italian. ### Languages The dataset is in Italian. ### Citation Information ``` @InProceedings{10.1007/978-3-030-03840-3_29, author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto", editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo", title="Neural Learning for Question Answering in Italian", booktitle="AI*IA 2018 -- Advances in Artificial Intelligence", year="2018", publisher="Springer International Publishing", address="Cham", pages="389--402", isbn="978-3-030-03840-3" } ```
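A hedged loading sketch for the SQuAD v2-style data described above; the repository id comes from this card, while the split name and the standard SQuAD v2 field names (`question`, `context`, `answers`) are assumptions to verify against the actual files:

```python
from datasets import load_dataset

# Assumes the repository loads directly with the datasets library and
# exposes a "train" split; adjust if the data ships as raw JSON files.
dataset = load_dataset("bullmount/squad-it-exp")
sample = dataset["train"][0]

# Standard SQuAD v2 fields are assumed here; in v2 format, an empty
# answers list marks an unanswerable question.
print(sample["question"])
print(sample["context"][:200])
print(sample["answers"])
```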
bullmount/squad-it-exp
[ "region:us" ]
2022-04-15T04:03:58+00:00
{}
2022-04-17T17:30:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for squad_it_exp ## Table of Contents - Dataset Description - Dataset Summary - Languages - Additional Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary SQuAD-it-exp is a dataset derived from the SQuAD-it dataset originally created by Croce et al. in 2018.<br/> SQuAD-it-exp has been enriched by adding new unanswerable questions in SQuAD v2 format.<br/> The dataset contains nearly 90,000 question/answer pairs in Italian. ### Languages The dataset is in Italian.
[ "# Dataset Card for squad_it_exp", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\nSQuAD-it-exp is a dataset derived from the SQuAD-it dataset originally created by Croce et al. to 2018.<br/>\nSQuAD-it-exp has been enriched by adding new unanswerable questions in SQuAD v2 format.<br/>\nThe dataset contains nearly 90,000 pairs of questions/answers in Italian.", "### Languages\nThe dataset is for the ITALIAN language" ]
[ "TAGS\n#region-us \n", "# Dataset Card for squad_it_exp", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\nSQuAD-it-exp is a dataset derived from the SQuAD-it dataset originally created by Croce et al. to 2018.<br/>\nSQuAD-it-exp has been enriched by adding new unanswerable questions in SQuAD v2 format.<br/>\nThe dataset contains nearly 90,000 pairs of questions/answers in Italian.", "### Languages\nThe dataset is for the ITALIAN language" ]
4942ad98569e62b710c547f39f916724088ef520
### Dataset Summary This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation. The training objective is a text classification task: given a claim and a piece of evidence, predict whether the claim is related to the evidence.
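A minimal loading sketch for the classification task described above; since this card does not document the column schema, the snippet only inspects what the repository actually exposes instead of assuming field names:

```python
from datasets import load_dataset

# Load and inspect; the "train" split name is an assumption.
dataset = load_dataset("mwong/fever-claim-related")
print(dataset)                        # available splits and their sizes
print(dataset["train"].column_names)  # actual claim/evidence/label columns
print(dataset["train"][0])            # one claim/evidence pair
```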
mwong/fever-claim-related
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_fever", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-15T06:04:59+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_fever"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "fever", "pretty_name": "fever"}
2022-10-25T09:06:56+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from the Climate Fever dataset (URL), pre-processed and ready for training and evaluation. The training objective is a text classification task: given a claim and a piece of evidence, predict whether the claim is related to the evidence.
[ "### Dataset Summary\nThis dataset is extracted from Climate Fever dataset (URL pre-processed and ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if claim is related to evidence." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Fever dataset (URL pre-processed and ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if claim is related to evidence." ]
4c366c1882d27123f4aa640b824a29998f1c642d
### Dataset Summary This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation. The training objective is a text classification task: given a claim and a piece of evidence, predict whether the claim is related to the evidence.
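The claim/evidence pairing described above maps naturally onto sequence-pair classification. A hedged preprocessing sketch follows; the checkpoint name and the `claim`/`evidence` column names are illustrative assumptions, not documented by this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("mwong/climate-claim-related")

# "bert-base-uncased" is a placeholder checkpoint; any sequence-pair
# classifier tokenizer works the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode(batch):
    # Claim and evidence are encoded together as one sequence pair; the
    # column names are assumptions -- check dataset["train"].column_names.
    return tokenizer(batch["claim"], batch["evidence"], truncation=True)

encoded = dataset.map(encode, batched=True)
```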
mwong/climate-claim-related
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_fever", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-15T06:09:18+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_fever"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "climate-fever", "pretty_name": "climate-fever"}
2022-10-25T09:06:59+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from the Climate Fever dataset (URL), pre-processed and ready for training and evaluation. The training objective is a text classification task: given a claim and a piece of evidence, predict whether the claim is related to the evidence.
[ "### Dataset Summary\nThis dataset is extracted from Climate Fever dataset (URL pre-processed and, ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if claim is related to evidence." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_fever #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Fever dataset (URL pre-processed and, ready to train and evaluate.\nThe training objective is a text classification task - given a claim and evidence, predict if claim is related to evidence." ]
3d6d4da7cc2b491448d172eebf397560fdded10a
The reddit_keywords.tsv file contains about 170k single word embeddings (scraped from reddit, filtering from an initial set of ~700k based on a minimum occurrence threshold) in this format: ```tsv temporary -0.276235,-0.181357,-0.325729,0.129826,0.016490,-0.230246,-0.039997,-0.990187,-0.014679,-0.044081,-0.120046,-0.250614,-0.303871,-0.264685,-0.010019,-0.158764,0.086107,-0.018172,0.003005,-0.383161,0.412182,0.104374,0.041335,-0.018206,0.085453,0.016297,-0.015680,0.047611,-0.267469,0.046825,-0.367247,-0.020667,-0.348124,0.055445,-0.303014,0.087954,0.077361,-0.052910,0.404438,-0.107339,-0.027286,-0.174772,0.287671,0.118175,0.224158,0.210142,0.071295,0.052860,0.235766,-0.140977,-0.355314,-0.421407,0.076506,-0.050502,0.334099,-0.090490,-0.109730,0.517465,0.057345,0.322140,0.217463,-0.218778,0.200798,0.140536,0.160337,-0.302322,-0.098611,-0.100849,-0.171952,-0.333828,0.143839,-0.010286,0.103448,0.046543,-0.094578,-0.083335,0.216615,-0.185091,0.028321,-0.251232,-0.021522,0.135202,-0.059559,0.513552,-0.156604,-0.426751,0.029338,-0.086346,-0.001045,-0.210324,-0.196247,-0.127054,-1.732658,0.172654,0.064660,0.051606,0.393296,0.132444,0.068706,-0.264383,0.083144,0.357062,0.501775,0.099174,-0.179929,-0.031447,0.077417,0.141482,-0.302417,0.160296,0.484913,0.070273,0.117609,-0.024784,0.086234,-0.164586,-0.211837,0.243161,0.118945,0.051511,0.225772,-0.207831,-0.132836,0.096240,-0.443813,-0.347750,0.192331,0.119417,-0.067559,-0.208074,-0.117854,0.078054,0.401030,6.348532,-0.012304,-0.099742,-0.065778,-0.299336,-0.164993,-0.089712,0.153861,0.244722,0.138961,0.231054,-0.296617,-0.129511,-0.021327,-0.005316,-0.187050,-0.073289,0.019646,0.458080,-0.027326,0.283158,0.137897,-0.196312,0.023471,0.342747,0.109227,-0.137838,-0.008336,-0.212090,-0.277437,-0.088123,-0.150103,0.030977,0.094198,-0.086804,0.260256,0.036756,0.118120,0.409172,-0.174826,0.454344,-0.333416,0.069056,-0.143509,-0.263730,0.016844,-0.069509,0.240573,0.104100,-0.138059,-0.037173,-0.189750,0.015344,0.034381,-0.243249,-0.052328,-0.111057,0.015412,-0.114713,-0.321371,-0.207981,0.037036,0.103251,-0.011858,-0.289237,0.111561,-0.170033,-0.178935,-0.072297,-0.042672,0.190604,0.174237,-0.095280,0.302311,0.024456,0.038216,-0.223006,0.372462,0.323767,0.078378,-0.297173,-0.195620,0.417219,-0.187052,-0.542408,-0.134892,-0.226160,-0.530608,-0.161821,0.120570,0.010190,0.011004,0.218169,0.322732,0.095584,0.424685,0.293537,-0.191970,0.038989,0.042194,-0.388086,0.496116,0.204738,-0.145585,0.463766,-0.227611,0.127603,-0.074332,-0.199442,-0.055274,-0.042825,-0.120296,0.017672,-0.450518,-0.314901,-0.045003,0.031523,-0.079665,-0.315374,0.305340,0.004655,-0.083071,0.191413,0.043845,-0.213311,0.129284,-0.218377,-0.282955,-0.066901,-0.068339,0.002564,-0.146045,0.056669,0.186583,-0.048750,-0.072946,-0.071184,-0.202749,-0.217035,0.276314,-0.282127,0.128067,0.097095,-0.246900,0.232340,-0.238046,-0.304384,0.067498,0.018847,-0.058201,-0.283596,-0.215553,0.035647,0.096342,-0.175125,0.026618,-0.319932,0.423662,-0.063089,0.251738,0.073425,-0.242309,-0.272967,-0.218592,-0.050702,0.091938,0.026258,0.141810,0.014719,-0.415617,0.102258,0.323665,0.213101,-0.219119,-0.074313,-0.075735,0.031039,-0.085159,0.187972,6.345324,0.043324,-0.220423,0.052132,-0.001249,-0.114997,0.001450,0.004655,0.365987,0.536724,0.394376,0.003819,0.262951,-0.065768,-0.087903,0.027754,-0.069572,-2.503358,0.097163,0.222208,0.032130,0.004387,0.129158,-0.238117,0.168215,-0.196026,-0.092511,-0.095957,0.519996,0.053166,-0.138281,-0.071842,-0.024337,-0.182440,0.207966,0.262904,0.325529,-0.087270,0.199483,-0.098656
,0.097615,0.014249,-0.074579,0.351518,0.094744,0.148318,-0.173189,0.033593,0.027609,-0.045624,0.188491,0.203499,0.229421,0.050809,-0.222414,-0.016397,0.086318,0.116249,-0.242203,0.120892,0.042388,0.372276,-0.049954,-0.338517,-0.180879,0.083117,-0.284963,-0.178325,0.079176,0.019744,-0.023706,0.391955,-0.189259,-0.373736,0.149015,0.502598,-0.498027,-0.154271,-0.093499,-0.015292,-0.554516,0.355195,0.013390,0.475157,-0.366012,-0.138618,-0.045420,0.528353,0.134862,0.025135,0.141193,-0.075705,-0.265913,-0.227393,0.319143,-0.135606,-0.055334,-0.265537,0.124943,-0.176613,0.301410,0.243831,-0.190008,0.130851,0.057539,0.044628,0.205449,0.315888,-0.097760,-0.251490,-0.039288,-0.009690,-0.013857,0.292198,-0.114490,0.058920,0.032257,0.197568,-0.117429,-0.049549,-0.274646,-0.097156,-0.057420,0.261883,0.105485,-0.131978,-0.083086,0.492079,0.056150,0.163082,0.052169,-0.258462,0.164738,-0.121904,-0.349110,-0.399021,0.109116,0.108278,-0.102895,0.075380,0.120979,0.164346,-0.173332,0.038970,0.239190,0.404884,0.202795,0.021855,0.014958,0.220877,0.214221,-0.309071,0.157248,-0.182312,-0.069097,-0.271037,0.178052,-0.173829,0.410394,-0.023872,-0.118251,0.140042,-0.055087,0.269867,0.401690,0.251227,0.097262,0.225146,0.180279,-0.679833,0.014100,0.017635,-0.020673,0.288165,-0.162649,0.272822,0.118945,-0.178165,0.105399,0.076920,0.289865,0.479189,-0.379978,-0.074296,0.221087,0.110328,-0.434901,-0.009920,-0.329799,-0.326210,0.121444,-0.399424,0.131924,0.035093,-0.330143,-0.332781,-0.375134,-0.429944,-0.028793,-0.084496 permanent -0.125035,-0.234378,0.011184,0.196125,-0.178078,-0.278433,-0.169808,-0.477378,-0.091331,0.051704,0.052124,-0.342429,0.236901,-0.503706,-0.054427,0.378874,0.356929,0.098530,0.213484,-0.350122,0.476689,0.349297,-0.421352,0.131538,-0.037294,0.242601,0.110521,0.297674,-0.003884,-0.164057,-0.181568,-0.114656,-0.022335,-0.058460,-0.392774,0.592076,-0.037568,-0.093719,0.273190,0.031433,-0.276135,-0.129429,0.202552,0.247301,0.162464,0.331153,0.150925,0.103975,0.040481,-0.308759,-0.468749,-0.118056,-0.177642,0.071796,0.019445,-0.051476,0.051152,0.208523,0.207935,0.215263,0.240936,-0.260006,0.273524,-0.102152,0.086342,-0.583079,0.104273,0.052269,-0.079865,-0.353752,-0.042390,0.052536,0.373398,-0.083875,-0.085006,-0.094790,0.209163,0.116218,-0.000282,0.063966,-0.142604,0.170597,-0.014974,0.339414,-0.459107,-0.563759,0.073553,0.011647,0.132144,0.024776,-0.104373,-0.136440,-1.464302,0.560471,0.167517,0.387043,0.013425,0.354265,-0.273501,-0.138256,0.346923,0.277063,0.132669,-0.100053,-0.031133,-0.137729,-0.038392,0.127757,0.201051,0.122387,-0.091108,0.112959,-0.076981,-0.091213,0.259445,-0.250712,-0.086296,0.077766,-0.400991,-0.061569,0.295548,-0.546704,-0.181826,-0.145557,-0.003189,-0.065816,0.313023,-0.340320,-0.232408,0.108998,0.259111,0.151180,-0.166929,6.414488,0.501402,-0.091578,-0.057641,-0.482665,-0.142667,-0.264874,0.361437,0.394330,-0.229426,-0.091375,-0.243107,0.303489,-0.005123,-0.055163,0.015856,0.069838,0.031935,0.278514,0.166143,0.474343,0.105431,-0.076213,0.039309,-0.111546,0.012941,-0.164336,-0.017733,-0.281277,-0.086701,-0.025275,0.286478,-0.012244,0.419024,-0.218707,0.303495,-0.144674,-0.015870,0.211324,-0.125048,0.148710,-0.164560,0.090908,-0.009281,-0.350103,0.044986,0.231121,0.168193,-0.172223,-0.155072,-0.071494,-0.171293,0.057206,0.509076,-0.468795,-0.048402,-0.062685,-0.073230,-0.009878,-0.075013,-0.291539,0.082641,0.120893,-0.036523,-0.371523,-0.089427,-0.135797,0.039259,-0.154754,-0.454964,0.118403,0.345686,0.308087,0.189306,-0.186566,-0.052662,0.092485,0.443999,0.371476,0.544698,0.
163462,0.211605,0.028551,-0.331050,-0.118373,-0.130023,-0.238063,-0.194845,-0.147683,0.269614,0.094254,0.080196,-0.016950,0.226205,0.251216,0.029845,0.241027,0.037793,0.250348,0.178878,-0.370625,0.021588,0.053517,0.089717,0.034888,-0.127252,-0.095143,-0.048264,-0.038292,-0.114234,0.081980,-0.344073,-0.199645,0.133945,0.046776,-0.164213,-0.125509,-0.078354,0.015241,0.525474,0.109172,-0.063231,-0.204491,-0.021912,-0.035685,-0.036702,-0.021009,-0.296292,-0.110856,-0.017419,-0.346117,0.123624,-0.022428,0.178189,-0.263868,-0.301899,-0.151516,0.189889,-0.468874,0.149208,-0.343711,-0.091530,0.159136,-0.205789,0.289440,0.010746,-0.458909,-0.003668,-0.154943,0.190147,-0.072661,-0.098433,-0.260481,-0.013620,0.239844,0.175296,0.013196,0.417082,0.388816,0.610390,0.218911,-0.285315,-0.397401,-0.407900,0.112988,-0.276133,-0.189056,0.077117,-0.106753,-0.315161,0.237132,0.145833,0.157616,-0.081629,-0.078093,0.011940,0.147423,-0.169398,0.207446,6.412528,0.098466,-0.073285,0.456039,-0.219336,0.225516,-0.126300,-0.085544,0.067576,0.480005,0.323118,-0.062557,0.029795,0.159936,0.207522,0.061824,0.081886,-2.257158,-0.030088,0.212595,0.034217,0.134851,0.060425,0.273878,0.141417,-0.372521,-0.256853,-0.449594,0.111124,-0.035848,0.246770,0.013314,0.095374,0.004721,0.067856,0.068077,0.466395,-0.224380,0.224336,-0.260102,-0.119188,-0.059211,-0.037218,0.257120,0.285553,-0.044297,0.090169,0.235599,0.122039,-0.496953,-0.158066,-0.082501,-0.067778,-0.129525,0.083023,0.095774,-0.055067,0.059558,-0.207958,-0.003060,-0.127451,0.016735,0.315940,0.089926,0.034565,0.117776,-0.771444,0.075713,0.224506,-0.118574,0.011695,-0.278467,0.103532,-0.355093,0.072645,0.554889,-0.227020,-0.206997,-0.082026,0.034534,-0.053694,-0.206349,0.212462,0.297205,-0.001969,-0.115018,-0.185391,0.439835,0.206251,0.275428,0.382456,0.143425,-0.285471,-0.118383,-0.042975,-0.127418,-0.086245,-0.013727,-0.001033,0.160316,0.266595,-0.023524,-0.233376,-0.004141,-0.194787,0.362530,0.329915,0.154557,-0.059378,-0.262286,-0.268349,0.006456,0.362081,-0.316577,-0.308787,-0.025319,-0.024902,0.180332,-0.309457,0.196798,-0.184453,0.050009,-0.142146,-0.352558,-0.272899,0.163149,-0.057184,0.129606,0.054385,0.049695,-0.017398,-0.508930,-0.402540,0.189464,-0.468858,-0.395760,0.130050,-0.095393,-0.081481,-0.148284,0.425590,0.208711,-0.175284,-0.026740,0.072050,0.488482,0.123898,0.185273,0.119057,-0.220553,0.085528,0.105237,0.326145,0.150759,0.081640,-0.403034,0.053875,-0.100664,0.156698,-0.125445,0.020690,0.407489,0.089345,0.201447,0.087196,0.215535,-0.358267,0.663917,0.102616,-0.911252,0.242669,0.106915,0.214653,0.051177,0.009364,-0.139282,-0.122035,0.074249,0.219813,-0.034759,0.089622,0.741162,0.248126,-0.458298,0.276372,-0.191041,-0.380611,0.273156,-0.160504,0.010753,-0.103899,0.379443,0.501672,-0.108174,-0.292099,0.073850,-0.201425,-0.711728,0.321330,0.047050 ```
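A minimal parsing sketch (not part of the original file description; it assumes each row of the file is `word<TAB>comma-separated floats` as shown above, and that the vectors are not pre-normalized — the query word "temporary" is taken from the sample above):

```python
import numpy as np


def load_embeddings(path):
    """Parse `word<TAB>f1,f2,...` rows into a {word: unit-normalized float32 vector} dict."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, sep, values = line.rstrip("\n").partition("\t")
            if not sep:
                continue  # skip malformed rows
            vec = np.asarray(values.split(","), dtype=np.float32)
            embeddings[word] = vec / np.linalg.norm(vec)  # normalize once for cosine similarity
    return embeddings


def nearest(embeddings, query, k=5):
    """Return the k words whose embeddings are closest to `query` by cosine similarity."""
    words = list(embeddings)
    matrix = np.stack([embeddings[w] for w in words])
    scores = matrix @ embeddings[query]  # dot product of unit vectors = cosine similarity
    order = np.argsort(-scores)
    return [(words[i], float(scores[i])) for i in order if words[i] != query][:k]


emb = load_embeddings("reddit_keywords.tsv")
print(nearest(emb, "temporary"))
```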
rocca/clip-keyphrase-embeddings
[ "license:apache-2.0", "region:us" ]
2022-04-15T06:25:59+00:00
{"license": "apache-2.0"}
2022-04-15T07:44:51+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
The reddit_keywords.tsv file contains about 170k single-word embeddings (scraped from Reddit, filtered down from an initial set of ~700k based on a minimum occurrence threshold) in this format:
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
cac1d6c26bc0c5266661019473ac0ffb33bcf9bc
# Dataset Card for C4

## Table of Contents

- [Dataset Card for C4](#dataset-card-for-c4)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683

### Dataset Summary

A colossal, cleaned version of Common Crawl's web crawl corpus. Based on the Common Crawl dataset: "https://commoncrawl.org".

This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4

It comes in four variants:

- `en`: 305GB in JSON format
- `en.noblocklist`: 380GB in JSON format
- `en.noclean`: 2.3TB in JSON format
- `realnewslike`: 15GB in JSON format

The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.

### Supported Tasks and Leaderboards

C4 is mainly intended to pretrain language models and word representations.

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances

An example from the `en` config is:

```
{
  'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
  'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
  'timestamp': '2019-04-25T12:57:54Z'
}
```

### Data Fields

The data have several fields:

- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string

### Data Splits

| name           |     train| validation|
|----------------|---------:|----------:|
| en             | 364868892|     364608|
| en.noblocklist | 393391519|     393226|
| en.noclean     |         ?|          ?|
| realnewslike   |  13799838|      13863|

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) by Tensorflow Datasets.

The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.

### Citation Information

```
@article{2019t5,
    author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal = {arXiv e-prints},
    year = {2019},
    archivePrefix = {arXiv},
    eprint = {1910.10683},
}
```

### Contributions

Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
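A minimal loading sketch (not from the original card): it assumes the Hugging Face `datasets` library and streams the `en` variant so records are fetched lazily rather than downloading 305GB up front.

```python
from datasets import load_dataset

# Stream the `en` variant; records arrive lazily instead of being downloaded in full.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

for i, example in enumerate(c4):
    # Each record carries the `url`, `text`, and `timestamp` fields described above.
    print(example["timestamp"], example["url"])
    print(example["text"][:200], "...")
    if i == 2:  # stop after a few samples
        break
```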
Peihao/test-dateset
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100M<n<1B", "source_datasets:original", "language:en", "license:odc-by", "arxiv:1910.10683", "region:us" ]
2022-04-15T06:45:58+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["odc-by"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "c4", "pretty_name": "C4"}
2022-10-25T09:08:29+00:00
[ "1910.10683" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-odc-by #arxiv-1910.10683 #region-us
Dataset Card for C4 =================== Table of Contents ----------------- * Dataset Card for C4 + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL ### Dataset Summary A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "URL". This is the version prepared by AllenAI, hosted at this address: URL It comes in four variants: * 'en': 305GB in JSON format * 'en.noblocklist': 380GB in JSON format * 'en.noclean': 2.3TB in JSON format * 'realnewslike': 15GB in JSON format The 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at URL ### Supported Tasks and Leaderboards C4 is mainly intended to pretrain language models and word representations. ### Languages The dataset is in English. Dataset Structure ----------------- ### Data Instances An example form the 'en' config is: ### Data Fields The data have several fields: * 'url': url of the source as a string * 'text': text content as a string * 'timestamp': timestamp as a string ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in URL by Tensorflow Datasets. The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect was discarded. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Contributions Thanks to @dirkgr and @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nA colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: \"URL\".\n\n\nThis is the version prepared by AllenAI, hosted at this address: URL\n\n\nIt comes in four variants:\n\n\n* 'en': 305GB in JSON format\n* 'en.noblocklist': 380GB in JSON format\n* 'en.noclean': 2.3TB in JSON format\n* 'realnewslike': 15GB in JSON format\n\n\nThe 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called \"badwords filter\", which removes all documents that contain words from the lists at URL", "### Supported Tasks and Leaderboards\n\n\nC4 is mainly intended to pretrain language models and word representations.", "### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example form the 'en' config is:", "### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp as a string", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nC4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in URL by Tensorflow Datasets.\n\n\nThe dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect was discarded.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to @dirkgr and @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-odc-by #arxiv-1910.10683 #region-us \n", "### Dataset Summary\n\n\nA colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: \"URL\".\n\n\nThis is the version prepared by AllenAI, hosted at this address: URL\n\n\nIt comes in four variants:\n\n\n* 'en': 305GB in JSON format\n* 'en.noblocklist': 380GB in JSON format\n* 'en.noclean': 2.3TB in JSON format\n* 'realnewslike': 15GB in JSON format\n\n\nThe 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called \"badwords filter\", which removes all documents that contain words from the lists at URL", "### Supported Tasks and Leaderboards\n\n\nC4 is mainly intended to pretrain language models and word representations.", "### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example form the 'en' config is:", "### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp as a string", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nC4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in URL by Tensorflow Datasets.\n\n\nThe dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect was discarded.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to @dirkgr and @lhoestq for adding this dataset." ]
56a5a8b0efca4529b0891c6a3cdfb35c2309dfc4
# Dataset Card for Conceptual 12M

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m)
- **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:[email protected])

### Dataset Summary

Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training. Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).

### Dataset Preprocessing

This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("conceptual_12m")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

### Supported Tasks and Leaderboards

- `image-captioning`: This dataset can be used to train a model for the Image Captioning task.

### Languages

All captions are in English.

## Dataset Structure

### Data Instances

Each instance represents a single image with a caption:

```
{
  'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
  'caption': 'a very typical bus station'
}
```

### Data Fields

- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.

### Data Splits

There is only training data, with a total of 12423374 rows.

## Dataset Creation

### Curation Rationale

Conceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.

### Source Data

#### Initial Data Collection and Normalization

From the paper:

> To arrive at CC12M, we keep the image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2. We still keep only JPEG images with size greater than 400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text between 3 and 256 words in the alt-text. We still discard candidates with no noun or no determiner, but permit ones without prepositions. We discard the heuristics regarding high unique-word ratio covering various POS tags and word capitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the above relaxations, the threshold for counting a word type as rare is increased from 5 to 20.

> The main motivation for CC3M to perform text transformation is that a majority of candidate captions contain ultrafine-grained entities such as proper names (people, venues, locations, etc.), making it extremely difficult to learn as part of the image captioning task. In contrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability. We thus do not perform hypernymization or digit substitution. [...] The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M are transformed in this fashion.

#### Who are the source language producers?

Not specified.

### Annotations

#### Annotation process

Annotations are extracted jointly with the images using the automatic pipeline.

#### Who are the annotators?

Not specified.

### Personal and Sensitive Information

From the paper:

> The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M are transformed in this fashion.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.

### Licensing Information

The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

### Citation Information

```bibtex
@inproceedings{changpinyo2021cc12m,
  title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
  author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
  booktitle = {CVPR},
  year = {2021},
}
```

### Contributions

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
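As a short usage sketch building on the `fetch_single_image` helper from the preprocessing section above (the sample size and timeout values here are arbitrary choices, not part of the original card):

```python
from datasets import load_dataset

# Load caption/URL pairs (a ~2.7GB TSV download), then resolve a few images
# with the `fetch_single_image` helper defined in the preprocessing section.
dset = load_dataset("conceptual_12m", split="train")

for row in dset.select(range(5)):
    image = fetch_single_image(row["image_url"], timeout=5, retries=1)
    status = f"{image.size[0]}x{image.size[1]}" if image is not None else "unavailable"
    print(f"{row['caption'][:60]!r} -> {status}")
```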
conceptual_12m
[ "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:other", "arxiv:2102.08981", "region:us" ]
2022-04-15T07:06:58+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "paperswithcode_id": "cc12m", "pretty_name": "Conceptual 12M", "dataset_info": {"features": [{"name": "image_url", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2794168030, "num_examples": 12423374}], "download_size": 2707204412, "dataset_size": 2794168030}}
2024-01-18T09:31:48+00:00
[ "2102.08981" ]
[ "en" ]
TAGS #task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-other #arxiv-2102.08981 #region-us
# Dataset Card for Conceptual 12M ## Table of Contents - Dataset Description - Dataset Summary - Dataset Preprocessing - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Repository: Conceptual 12M repository - Paper: Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts - Point of Contact: Conceptual Captions e-mail ### Dataset Summary Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for visionand-language pre-training. Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M). ### Dataset Preprocessing This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code: ### Supported Tasks and Leaderboards - 'image-captioning': This dataset can be used to train model for the Image Captioning task. ### Languages All captions are in English. ## Dataset Structure ### Data Instances Each instance represents a single image with a caption: ### Data Fields - 'image_url': Static URL for downloading the image associated with the post. - 'caption': Textual description of the image. ### Data Splits There is only training data, with a total of 12423374 rows ## Dataset Creation ### Curation Rationale Conceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps. ### Source Data #### Initial Data Collection and Normalization From the paper: > To arrive at CC12M, we keep the image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2. We still keep only JPEG images with size greater than 400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text between 3 and 256 words in the alt-text. We still discard candidates with no noun or no determiner, but permit ones without prepositions. We discard the heuristics regarding high unique-word ratio covering various POS tags and word capitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the above relaxations, the threshold for counting a word type as rare is increased from 5 to 20 > The main motivation for CC3M to perform text transformation is that a majority of candidate captions contain ultrafine-grained entities such as proper names (people, venues, locations, etc.), making it extremely difficult to learn as part of the image captioning task. In contrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability. We thus do not perform hypernimization or digit substitution. [...] The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. 
For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M are transformed in this fashion. #### Who are the source language producers? Not specified. ### Annotations #### Annotation process Annotations are extracted jointly with the images using the automatic pipeline. #### Who are the annotators? Not specified. ### Personal and Sensitive Information From the paper: > The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M are transformed in this fashion. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut. ### Licensing Information The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset. ### Contributions Thanks to @thomasw21 for adding this dataset.
[ "# Dataset Card for Conceptual 12M", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: Conceptual 12M repository\n- Paper: Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts\n- Point of Contact: Conceptual Captions e-mail", "### Dataset Summary\n\nConceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for visionand-language pre-training.\nIts data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).", "### Dataset Preprocessing\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:", "### Supported Tasks and Leaderboards\n\n- 'image-captioning': This dataset can be used to train model for the Image Captioning task.", "### Languages\n\nAll captions are in English.", "## Dataset Structure", "### Data Instances\n\nEach instance represents a single image with a caption:", "### Data Fields\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.", "### Data Splits\n\nThere is only training data, with a total of 12423374 rows", "## Dataset Creation", "### Curation Rationale\n\nConceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.", "### Source Data", "#### Initial Data Collection and Normalization\n\nFrom the paper:\n> To arrive at CC12M, we keep\nthe image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2. \nWe still keep only JPEG images with size greater than\n400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text\nbetween 3 and 256 words in the alt-text. We still discard\ncandidates with no noun or no determiner, but permit ones\nwithout prepositions. We discard the heuristics regarding\nhigh unique-word ratio covering various POS tags and word\ncapitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the\nabove relaxations, the threshold for counting a word type as\nrare is increased from 5 to 20\n\n> The main motivation for CC3M to\nperform text transformation is that a majority of candidate\ncaptions contain ultrafine-grained entities such as proper\nnames (people, venues, locations, etc.), making it extremely\ndifficult to learn as part of the image captioning task. In\ncontrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability.\nWe thus do not perform hypernimization or digit substitution. [...] 
The only exception to the “keep alt-texts as\nraw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy\nof the individuals in these images. For this step, we use the\nGoogle Cloud Natural Language APIs to detect all named\nentities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M\nare transformed in this fashion.", "#### Who are the source language producers?\n\nNot specified.", "### Annotations", "#### Annotation process\n\nAnnotations are extracted jointly with the images using the automatic pipeline.", "#### Who are the annotators?\n\nNot specified.", "### Personal and Sensitive Information\n\nFrom the paper:\n\n> The only exception to the “keep alt-texts as\nraw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy\nof the individuals in these images. For this step, we use the\nGoogle Cloud Natural Language APIs to detect all named\nentities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M\nare transformed in this fashion.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nSoravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.", "### Licensing Information\n\nThe dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.", "### Contributions\n\nThanks to @thomasw21 for adding this dataset." ]
[ "TAGS\n#task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-other #arxiv-2102.08981 #region-us \n", "# Dataset Card for Conceptual 12M", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: Conceptual 12M repository\n- Paper: Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts\n- Point of Contact: Conceptual Captions e-mail", "### Dataset Summary\n\nConceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for visionand-language pre-training.\nIts data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).", "### Dataset Preprocessing\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:", "### Supported Tasks and Leaderboards\n\n- 'image-captioning': This dataset can be used to train model for the Image Captioning task.", "### Languages\n\nAll captions are in English.", "## Dataset Structure", "### Data Instances\n\nEach instance represents a single image with a caption:", "### Data Fields\n\n- 'image_url': Static URL for downloading the image associated with the post.\n- 'caption': Textual description of the image.", "### Data Splits\n\nThere is only training data, with a total of 12423374 rows", "## Dataset Creation", "### Curation Rationale\n\nConceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.", "### Source Data", "#### Initial Data Collection and Normalization\n\nFrom the paper:\n> To arrive at CC12M, we keep\nthe image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2. \nWe still keep only JPEG images with size greater than\n400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text\nbetween 3 and 256 words in the alt-text. We still discard\ncandidates with no noun or no determiner, but permit ones\nwithout prepositions. We discard the heuristics regarding\nhigh unique-word ratio covering various POS tags and word\ncapitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the\nabove relaxations, the threshold for counting a word type as\nrare is increased from 5 to 20\n\n> The main motivation for CC3M to\nperform text transformation is that a majority of candidate\ncaptions contain ultrafine-grained entities such as proper\nnames (people, venues, locations, etc.), making it extremely\ndifficult to learn as part of the image captioning task. In\ncontrast, we are not restricted by the end task of image caption generation. 
Our intuition is that relatively more difficult pre-training data would lead to better transferability.\nWe thus do not perform hypernimization or digit substitution. [...] The only exception to the “keep alt-texts as\nraw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy\nof the individuals in these images. For this step, we use the\nGoogle Cloud Natural Language APIs to detect all named\nentities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M\nare transformed in this fashion.", "#### Who are the source language producers?\n\nNot specified.", "### Annotations", "#### Annotation process\n\nAnnotations are extracted jointly with the images using the automatic pipeline.", "#### Who are the annotators?\n\nNot specified.", "### Personal and Sensitive Information\n\nFrom the paper:\n\n> The only exception to the “keep alt-texts as\nraw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy\nof the individuals in these images. For this step, we use the\nGoogle Cloud Natural Language APIs to detect all named\nentities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M\nare transformed in this fashion.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nSoravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.", "### Licensing Information\n\nThe dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.", "### Contributions\n\nThanks to @thomasw21 for adding this dataset." ]
35d54d53495778c09cabd7f86019cac79e578aed
FFHQ 70000张png图片 链接:https://pan.baidu.com/s/1XDfTKWOhtwAAQQJ0KBU4RQ 提取码:bowj ## Flickr-Faces-HQ Dataset (FFHQ) ![Python 3.6](https://img.shields.io/badge/python-3.6-green.svg?style=plastic) ![License CC](https://img.shields.io/badge/license-CC-green.svg?style=plastic) ![Format PNG](https://img.shields.io/badge/format-PNG-green.svg?style=plastic) ![Resolution 1024&times;1024](https://img.shields.io/badge/resolution-1024&times;1024-green.svg?style=plastic) ![Images 70000](https://img.shields.io/badge/images-70,000-green.svg?style=plastic) ![Teaser image](./ffhq-teaser.png) Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN): > **A Style-Based Generator Architecture for Generative Adversarial Networks**<br> > Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)<br> > http://stylegan.xyz/paper The dataset consists of 70,000 high-quality PNG images at 1024&times;1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from [Flickr](https://www.flickr.com/), thus inheriting all the biases of that website, and automatically aligned and cropped using [dlib](http://dlib.net/). Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally [Amazon Mechanical Turk](https://www.mturk.com/) was used to remove the occasional statues, paintings, or photos of photos. For business inquiries, please contact [[email protected]](mailto:[email protected]) For press and other inquiries, please contact Hector Marinez at [[email protected]](mailto:[email protected]) ## Licenses The individual images were published in Flickr by their respective authors under either [Creative Commons BY 2.0](https://creativecommons.org/licenses/by/2.0/), [Creative Commons BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/), [Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/), [Public Domain CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/), or [U.S. Government Works](http://www.usa.gov/copyright.shtml) license. All of these licenses allow **free use, redistribution, and adaptation for non-commercial purposes**. However, some of them require giving **appropriate credit** to the original author, as well as **indicating any changes** that were made to the images. The license and original author of each image are indicated in the metadata. * [https://creativecommons.org/licenses/by/2.0/](https://creativecommons.org/licenses/by/2.0/) * [https://creativecommons.org/licenses/by-nc/2.0/](https://creativecommons.org/licenses/by-nc/2.0/) * [https://creativecommons.org/publicdomain/mark/1.0/](https://creativecommons.org/publicdomain/mark/1.0/) * [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/) * [http://www.usa.gov/copyright.shtml](http://www.usa.gov/copyright.shtml) The dataset itself (including JSON metadata, download script, and documentation) is made available under [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license by NVIDIA Corporation. You can **use, redistribute, and adapt it for non-commercial purposes**, as long as you (a) give appropriate credit by **citing our paper**, (b) **indicate any changes** that you've made, and (c) distribute any derivative works **under the same license**. 
* [https://creativecommons.org/licenses/by-nc-sa/4.0/](https://creativecommons.org/licenses/by-nc-sa/4.0/) ## Overview All data is hosted on Google Drive: | Path | Size | Files | Format | Description | :--- | :--: | ----: | :----: | :---------- | [ffhq-dataset](https://drive.google.com/open?id=1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP) | 2.56 TB | 210,014 | | Main folder | &boxvr;&nbsp; [ffhq-dataset-v1.json](https://drive.google.com/open?id=1IB0BFbN_eRZx9UkJqLHSgJiQhqX-PrI6) | 254 MB | 1 | JSON | Metadata including copyright info, URLs, etc. | &boxvr;&nbsp; [images1024x1024](https://drive.google.com/open?id=1u3Hbfn3Q6jsTlte3BY85CGwId77H-OOu) | 89.1 GB | 70,000 | PNG | Aligned and cropped images at 1024&times;1024 | &boxvr;&nbsp; [thumbnails128x128](https://drive.google.com/open?id=1uJkWCpLUM-BnXW3H_IgVMdfENeNDFNmC) | 1.95 GB | 70,000 | PNG | Thumbnails at 128&times;128 | &boxvr;&nbsp; [in-the-wild-images](https://drive.google.com/open?id=1YyuocbwILsHAjTusSUG-_zL343jlVBhf) | 955 GB | 70,000 | PNG | Original images from Flickr | &boxvr;&nbsp; [tfrecords](https://drive.google.com/open?id=1LTBpJ0W_WLjqza3zdayligS8Dh1V1gA6) | 273 GB | 9 | tfrecords | Multi-resolution data for [StyleGAN](http://stylegan.xyz/code) and [ProGAN](https://github.com/tkarras/progressive_growing_of_gans) | &boxur;&nbsp; [zips](https://drive.google.com/open?id=1WocxvZ4GEZ1DI8dOz30aSj2zT6pkATYS) | 1.28 TB | 4 | ZIP | Contents of each folder as a ZIP archive. High-level statistics: ![Pie charts](./ffhq-piecharts.png) For use cases that require separate training and validation sets, we have appointed the first 60,000 images to be used for training and the remaining 10,000 for validation. In the [StyleGAN paper](http://stylegan.xyz/paper), however, we used all 70,000 images for training. We have explicitly made sure that there are no duplicate images in the dataset itself. However, please note that the `in-the-wild` folder may contain multiple copies of the same image in cases where we extracted several different faces from the same image. ## Download script You can either grab the data directly from Google Drive or use the provided [download script](./download_ffhq.py). The script makes things considerably easier by automatically downloading all the requested files, verifying their checksums, retrying each file several times on error, and employing multiple concurrent connections to maximize bandwidth. ``` > python download_ffhq.py -h usage: download_ffhq.py [-h] [-j] [-s] [-i] [-t] [-w] [-r] [-a] [--num_threads NUM] [--status_delay SEC] [--timing_window LEN] [--chunk_size KB] [--num_attempts NUM] Download Flickr-Face-HQ (FFHQ) dataset to current working directory. optional arguments: -h, --help show this help message and exit -j, --json download metadata as JSON (254 MB) -s, --stats print statistics about the dataset -i, --images download 1024x1024 images as PNG (89.1 GB) -t, --thumbs download 128x128 thumbnails as PNG (1.95 GB) -w, --wilds download in-the-wild images as PNG (955 GB) -r, --tfrecords download multi-resolution TFRecords (273 GB) -a, --align recreate 1024x1024 images from in-the-wild images --num_threads NUM number of concurrent download threads (default: 32) --status_delay SEC time between download status prints (default: 0.2) --timing_window LEN samples for estimating download eta (default: 50) --chunk_size KB chunk size for each download thread (default: 128) --num_attempts NUM number of download attempts per file (default: 10) ``` ``` > python ..\download_ffhq.py --json --images Downloading JSON metadata... 
\ 100.00% done 1/1 files 0.25/0.25 GB 43.21 MB/s ETA: done Parsing JSON metadata... Downloading 70000 files... | 100.00% done 70000/70000 files 89.19 GB/89.19 GB 59.87 MB/s ETA: done ``` The script also serves as a reference implementation of the automated scheme that we used to align and crop the images. Once you have downloaded the in-the-wild images with `python download_ffhq.py --wilds`, you can run `python download_ffhq.py --align` to reproduce exact replicas of the aligned 1024&times;1024 images using the facial landmark locations included in the metadata. ## Metadata The `ffhq-dataset-v1.json` file contains the following information for each image in a machine-readable format: ``` { "0": { # Image index "category": "training", # Training or validation "metadata": { # Info about the original Flickr photo: "photo_url": "https://www.flickr.com/photos/...", # - Flickr URL "photo_title": "DSCF0899.JPG", # - File name "author": "Jeremy Frumkin", # - Author "country": "", # - Country where the photo was taken "license": "Attribution-NonCommercial License", # - License name "license_url": "https://creativecommons.org/...", # - License detail URL "date_uploaded": "2007-08-16", # - Date when the photo was uploaded to Flickr "date_crawled": "2018-10-10" # - Date when the photo was crawled from Flickr }, "image": { # Info about the aligned 1024x1024 image: "file_url": "https://drive.google.com/...", # - Google Drive URL "file_path": "images1024x1024/00000.png", # - Google Drive path "file_size": 1488194, # - Size of the PNG file in bytes "file_md5": "ddeaeea6ce59569643715759d537fd1b", # - MD5 checksum of the PNG file "pixel_size": [1024, 1024], # - Image dimensions "pixel_md5": "47238b44dfb87644460cbdcc4607e289", # - MD5 checksum of the raw pixel data "face_landmarks": [...] # - 68 face landmarks reported by dlib }, "thumbnail": { # Info about the 128x128 thumbnail: "file_url": "https://drive.google.com/...", # - Google Drive URL "file_path": "thumbnails128x128/00000.png", # - Google Drive path "file_size": 29050, # - Size of the PNG file in bytes "file_md5": "bd3e40b2ba20f76b55dc282907b89cd1", # - MD5 checksum of the PNG file "pixel_size": [128, 128], # - Image dimensions "pixel_md5": "38d7e93eb9a796d0e65f8c64de8ba161" # - MD5 checksum of the raw pixel data }, "in_the_wild": { # Info about the in-the-wild image: "file_url": "https://drive.google.com/...", # - Google Drive URL "file_path": "in-the-wild-images/00000.png", # - Google Drive path "file_size": 3991569, # - Size of the PNG file in bytes "file_md5": "1dc0287e73e485efb0516a80ce9d42b4", # - MD5 checksum of the PNG file "pixel_size": [2016, 1512], # - Image dimensions "pixel_md5": "86b3470c42e33235d76b979161fb2327", # - MD5 checksum of the raw pixel data "face_rect": [667, 410, 1438, 1181], # - Axis-aligned rectangle of the face region "face_landmarks": [...], # - 68 face landmarks reported by dlib "face_quad": [...] # - Aligned quad of the face region } }, ... } ``` ## Acknowledgements We thank Jaakko Lehtinen, David Luebke, and Tuomas Kynk&auml;&auml;nniemi for in-depth discussions and helpful comments; Janne Hellsten, Tero Kuosmanen, and Pekka J&auml;nis for compute infrastructure and help with the code release. We also thank Vahid Kazemi and Josephine Sullivan for their work on automatic face detection and alignment that enabled us to collect the data in the first place: > **One Millisecond Face Alignment with an Ensemble of Regression Trees**<br> > Vahid Kazemi, Josephine Sullivan<br> > Proc. 
CVPR 2014<br> > https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Kazemi_One_Millisecond_Face_2014_CVPR_paper.pdf
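As a quick illustration of how the metadata above can be consumed, here is a minimal sketch that loads `ffhq-dataset-v1.json` and verifies one aligned image against its recorded MD5 checksum. It assumes the JSON file and the `images1024x1024` folder have already been fetched into the working directory (e.g. via `python download_ffhq.py --json --images`); all field names are taken from the metadata example above.

```python
import hashlib
import json

# Load the per-image metadata (downloaded with `download_ffhq.py --json`).
with open('ffhq-dataset-v1.json') as f:
    metadata = json.load(f)

# Pick the record for image 0 and check the aligned 1024x1024 PNG
# against the MD5 checksum stored in the metadata.
record = metadata['0']
image_info = record['image']

with open(image_info['file_path'], 'rb') as f:  # images1024x1024/00000.png
    digest = hashlib.md5(f.read()).hexdigest()

assert digest == image_info['file_md5'], 'checksum mismatch'
print(record['category'], image_info['pixel_size'], record['metadata']['license'])
```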
student/FFHQ
[ "region:us" ]
2022-04-16T04:21:25+00:00
{}
2022-04-16T05:24:36+00:00
[]
[]
TAGS #region-us
FFHQ 70000张png图片 链接:URL 提取码:bowj Flickr-Faces-HQ Dataset (FFHQ) ------------------------------ !Python 3.6 !License CC !Format PNG !Resolution 1024×1024 !Images 70000 !Teaser image Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN): > > A Style-Based Generator Architecture for Generative Adversarial Networks > > Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA) > > URL > > > The dataset consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from Flickr, thus inheriting all the biases of that website, and automatically aligned and cropped using dlib. Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally Amazon Mechanical Turk was used to remove the occasional statues, paintings, or photos of photos. For business inquiries, please contact researchinquiries@URL For press and other inquiries, please contact Hector Marinez at hmarinez@URL Licenses -------- The individual images were published in Flickr by their respective authors under either Creative Commons BY 2.0, Creative Commons BY-NC 2.0, Public Domain Mark 1.0, Public Domain CC0 1.0, or U.S. Government Works license. All of these licenses allow free use, redistribution, and adaptation for non-commercial purposes. However, some of them require giving appropriate credit to the original author, as well as indicating any changes that were made to the images. The license and original author of each image are indicated in the metadata. * URL * URL * URL * URL * URL The dataset itself (including JSON metadata, download script, and documentation) is made available under Creative Commons BY-NC-SA 4.0 license by NVIDIA Corporation. You can use, redistribute, and adapt it for non-commercial purposes, as long as you (a) give appropriate credit by citing our paper, (b) indicate any changes that you've made, and (c) distribute any derivative works under the same license. * URL Overview -------- All data is hosted on Google Drive: High-level statistics: !Pie charts For use cases that require separate training and validation sets, we have appointed the first 60,000 images to be used for training and the remaining 10,000 for validation. In the StyleGAN paper, however, we used all 70,000 images for training. We have explicitly made sure that there are no duplicate images in the dataset itself. However, please note that the 'in-the-wild' folder may contain multiple copies of the same image in cases where we extracted several different faces from the same image. Download script --------------- You can either grab the data directly from Google Drive or use the provided download script. The script makes things considerably easier by automatically downloading all the requested files, verifying their checksums, retrying each file several times on error, and employing multiple concurrent connections to maximize bandwidth. The script also serves as a reference implementation of the automated scheme that we used to align and crop the images. 
Once you have downloaded the in-the-wild images with 'python download\_ffhq.py --wilds', you can run 'python download\_ffhq.py --align' to reproduce exact replicas of the aligned 1024×1024 images using the facial landmark locations included in the metadata. Metadata -------- The 'URL' file contains the following information for each image in a machine-readable format: Acknowledgements ---------------- We thank Jaakko Lehtinen, David Luebke, and Tuomas Kynkäänniemi for in-depth discussions and helpful comments; Janne Hellsten, Tero Kuosmanen, and Pekka Jänis for compute infrastructure and help with the code release. We also thank Vahid Kazemi and Josephine Sullivan for their work on automatic face detection and alignment that enabled us to collect the data in the first place: > > One Millisecond Face Alignment with an Ensemble of Regression Trees > > Vahid Kazemi, Josephine Sullivan > > Proc. CVPR 2014 > > URL > > >
[]
[ "TAGS\n#region-us \n" ]
c31370fdaed32ac5f64fed5ee6d7dd6397f5e47a
# Malay-TTS-Yasmin

All notebooks and related code are available at https://github.com/huseinzol05/malaya-speech/tree/master/data/azure-tts

## Attributes

### Wiki and News

- 24000 sample rate, super clean.
- narrator `ms-MY-YasminNeural`.
- approximately 99.4 hours.
- Texts from Malay Wikipedia and News.
- Sentences between 2 and 20 words.

### Parliament

- 24000 sample rate, super clean.
- narrator `ms-MY-YasminNeural`.
- approximately 142 hours.
- Texts from the Malaysian Parliament (in Malay).
- Sentences between 2 and 25 words.

## how-to

### Wiki and News

1. Download [populated-text.json](populated-text.json) and [tts-malay-yasmin.tar.gz](tts-malay-yasmin.tar.gz).

2. To get a wav file and its transcript:

```python
import json
import soundfile as sf

with open('populated-text.json') as fopen:
    texts = json.load(fopen)

index = 0
text = texts[index]
y, sr = sf.read(f'female/{index}.wav')
```

### Parliament

1. Download [populated-parliament.json](populated-parliament.json) and [tts-malay-yasmin-parliament.tar.gz](tts-malay-yasmin-parliament.tar.gz).

2. To get a wav file and its transcript:

```python
import json
import soundfile as sf

with open('populated-parliament.json') as fopen:
    texts = json.load(fopen)

index = 0
text = texts[index]
y, sr = sf.read(f'female-parliament/{index}.wav')
```
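Since the hour counts above are approximate, they can be rechecked against the extracted audio. The following is a small sketch, assuming `populated-text.json` and the extracted `female/` folder from the how-to above are present in the working directory:

```python
import json
import soundfile as sf

# Recompute the Wiki/News subset duration from the extracted clips;
# sf.info reads only the file header, so this stays fast.
with open('populated-text.json') as fopen:
    texts = json.load(fopen)

total_seconds = sum(
    sf.info(f'female/{index}.wav').duration for index in range(len(texts))
)
print(f'{total_seconds / 3600:.1f} hours across {len(texts)} clips')
```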
huseinzol05/Malay-TTS-Yasmin
[ "region:us" ]
2022-04-16T08:28:21+00:00
{}
2022-04-25T05:21:15+00:00
[]
[]
TAGS #region-us
# Malay-TTS-Yasmin

All notebooks and related code are available at URL

## Attributes

### Wiki and News

- 24000 sample rate, super clean.
- narrator 'ms-MY-YasminNeural'.
- approximately 99.4 hours.
- Texts from Malay Wikipedia and News.
- Sentences between 2 and 20 words.

### Parliament

- 24000 sample rate, super clean.
- narrator 'ms-MY-YasminNeural'.
- approximately 142 hours.
- Texts from the Malaysian Parliament (in Malay).
- Sentences between 2 and 25 words.

## how-to

### Wiki and News

1. Download URL and URL.

2. To get a wav file and its transcript:

### Parliament

1. Download URL and URL.

2. To get a wav file and its transcript:
[ "# Malay-TTS-Yasmin\n\nAll notebooks and code related at URL", "## Attributes", "### Wiki and News\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-YasminNeural'.\n- approximate 99.4 hours.\n- Texts from Malay Wikipedia and News.\n- Sentences between 2 words and 20 words.", "### Parliament\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-YasminNeural'.\n- approximate 142 hours.\n- Texts from Malaysia Malay Parliament.\n- Sentences between 2 words and 25 words.", "## how-to", "### Wiki and News\n\n1. Download URL and URL.\n\n2. To get wav and transcript,", "### Parliament\n\n1. Download URL and URL.\n\n2. To get wav and transcript," ]
[ "TAGS\n#region-us \n", "# Malay-TTS-Yasmin\n\nAll notebooks and code related at URL", "## Attributes", "### Wiki and News\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-YasminNeural'.\n- approximate 99.4 hours.\n- Texts from Malay Wikipedia and News.\n- Sentences between 2 words and 20 words.", "### Parliament\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-YasminNeural'.\n- approximate 142 hours.\n- Texts from Malaysia Malay Parliament.\n- Sentences between 2 words and 25 words.", "## how-to", "### Wiki and News\n\n1. Download URL and URL.\n\n2. To get wav and transcript,", "### Parliament\n\n1. Download URL and URL.\n\n2. To get wav and transcript," ]
61ff60cb76c1fabef90bc8013587f0fb6a4fa142
# Dataset Card for Top Quark Tagging

## Table of Contents

- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://zenodo.org/record/2603256
- **Paper:** https://arxiv.org/abs/1902.09914
- **Point of Contact:** [Gregor Kasieczka]([email protected])

### Dataset Summary

Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p_x, p_y, p_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.

### Supported Tasks and Leaderboards

- `tabular-classification`: The dataset can be used to train a model for tabular binary classification, which consists in predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* [accuracy](https://huggingface.co/metrics/accuracy) and AUC score.

## Dataset Structure

### Data Instances

Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the `is_signal_new` column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows:

```
{'E_0': 474.0711364746094,
 'PX_0': -250.34703063964844,
 'PY_0': -223.65196228027344,
 'PZ_0': -334.73809814453125,
 ...
 'E_199': 0.0,
 'PX_199': 0.0,
 'PY_199': 0.0,
 'PZ_199': 0.0,
 'truthE': 0.0,
 'truthPX': 0.0,
 'truthPY': 0.0,
 'truthPZ': 0.0,
 'ttv': 0,
 'is_signal_new': 0}
```

### Data Fields

The fields in the dataset have the following meaning:

- `E_i`: the energy of jet constituent \\(i\\).
- `PX_i`: the \\(x\\) component of the jet constituent's momentum
- `PY_i`: the \\(y\\) component of the jet constituent's momentum
- `PZ_i`: the \\(z\\) component of the jet constituent's momentum
- `truthE`: the energy of the top-quark
- `truthPX`: the \\(x\\) component of the top quark's momentum
- `truthPY`: the \\(y\\) component of the top quark's momentum
- `truthPZ`: the \\(z\\) component of the top quark's momentum
- `ttv`: a flag that indicates which split (train, validation, or test) a jet belongs to. Redundant, since each split is provided as a separate dataset
- `is_signal_new`: the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.

### Data Splits

|                  | train   | validation | test   |
|------------------|--------:|-----------:|-------:|
| Number of events | 1211000 | 403000     | 404000 |

### Licensing Information

This dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information

```
@dataset{kasieczka_gregor_2019_2603256,
  author    = {Kasieczka, Gregor and Plehn, Tilman and Thompson, Jennifer and Russel, Michael},
  title     = {Top Quark Tagging Reference Dataset},
  month     = mar,
  year      = 2019,
  publisher = {Zenodo},
  version   = {v0 (2018_03_27)},
  doi       = {10.5281/zenodo.2603256},
  url       = {https://doi.org/10.5281/zenodo.2603256}
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
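To make the field layout above concrete, here is a small sketch that loads the dataset and reconstructs two standard jet observables from the stored four-momenta. It assumes the dataset loads from this hub repository (`dl4phys/top_tagging`) with the usual split names; the formulas \\(p_T = \sqrt{p_x^2 + p_y^2}\\) and \\(m^2 = E^2 - |\vec{p}|^2\\) are the standard definitions.

```python
import numpy as np
from datasets import load_dataset

# A sketch, assuming the dataset is published under this hub id with a
# "train" split; column names follow the Data Fields section above.
ds = load_dataset('dl4phys/top_tagging', split='train')
example = ds[0]

# Transverse momentum of the leading constituent: p_T = sqrt(px^2 + py^2).
pt_leading = np.hypot(example['PX_0'], example['PY_0'])

# Jet invariant mass from the summed constituent four-momenta; zero-padded
# constituents contribute nothing to the sums.
E  = sum(example[f'E_{i}']  for i in range(200))
px = sum(example[f'PX_{i}'] for i in range(200))
py = sum(example[f'PY_{i}'] for i in range(200))
pz = sum(example[f'PZ_{i}'] for i in range(200))
jet_mass = np.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

print(pt_leading, jet_mass, example['is_signal_new'])
```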
dl4phys/top_tagging
[ "license:cc-by-4.0", "arxiv:1902.09914", "region:us" ]
2022-04-16T08:53:34+00:00
{"license": "cc-by-4.0"}
2022-04-18T06:43:02+00:00
[ "1902.09914" ]
[]
TAGS #license-cc-by-4.0 #arxiv-1902.09914 #region-us
Dataset Card for Top Quark Tagging ================================== Table of Contents ----------------- * Dataset Card for Top Quark Tagging + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards + Dataset Structure - Data Instances - Data Fields - Data Splits - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL * Point of Contact: Gregor Kasieczka ### Dataset Summary Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \( (E, p\_x, p\_y, p\_z) \) are stored, with zero-padding applied to jets with fewer than 200 constituents. ### Supported Tasks and Leaderboards * 'tabular-classification': The dataset can be used to train a model for tabular binary classification, which consists in predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* accuracy and AUC score. Dataset Structure ----------------- ### Data Instances Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \(p\_T\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the 'is\_signal\_new' column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows: ### Data Fields The fields in the dataset have the following meaning: * 'E\_i': the energy of jet constituent \(i\). * 'PX\_i': the \(x\) component of the jet constituent's momentum * 'PY\_i': the \(y\) component of the jet constituent's momentum * 'PZ\_i': the \(z\) component of the jet constituent's momentum * 'truthE': the energy of the top-quark * 'truthPX': the \(x\) component of the top quark's momentum * 'truthPY': the \(y\) component of the top quark's momentum * 'truthPZ': the \(z\) component of the top quark's momentum * 'ttv': a flag that indicates which split (train, validation, or test) that a jet belongs to. Redundant since each split is provided as a separate dataset * 'is\_signal\_new': the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background. ### Data Splits ### Licensing Information This dataset is released under the Creative Commons Attribution 4.0 International license. ### Contributions Thanks to @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nTop Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p\\_x, p\\_y, p\\_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.", "### Supported Tasks and Leaderboards\n\n\n* 'tabular-classification': The dataset can be used to train a model for tabular binary classification, which consists in predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* accuracy and AUC score.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p\\_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the 'is\\_signal\\_new' column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows:", "### Data Fields\n\n\nThe fields in the dataset have the following meaning:\n\n\n* 'E\\_i': the energy of jet constituent \\(i\\).\n* 'PX\\_i': the \\(x\\) component of the jet constituent's momentum\n* 'PY\\_i': the \\(y\\) component of the jet constituent's momentum\n* 'PZ\\_i': the \\(z\\) component of the jet constituent's momentum\n* 'truthE': the energy of the top-quark\n* 'truthPX': the \\(x\\) component of the top quark's momentum\n* 'truthPY': the \\(y\\) component of the top quark's momentum\n* 'truthPZ': the \\(z\\) component of the top quark's momentum\n* 'ttv': a flag that indicates which split (train, validation, or test) that a jet belongs to. Redundant since each split is provided as a separate dataset\n* 'is\\_signal\\_new': the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.", "### Data Splits", "### Licensing Information\n\n\nThis dataset is released under the Creative Commons Attribution 4.0 International license.", "### Contributions\n\n\nThanks to @lewtun for adding this dataset." ]
[ "TAGS\n#license-cc-by-4.0 #arxiv-1902.09914 #region-us \n", "### Dataset Summary\n\n\nTop Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p\\_x, p\\_y, p\\_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.", "### Supported Tasks and Leaderboards\n\n\n* 'tabular-classification': The dataset can be used to train a model for tabular binary classification, which consists in predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* accuracy and AUC score.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p\\_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the 'is\\_signal\\_new' column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows:", "### Data Fields\n\n\nThe fields in the dataset have the following meaning:\n\n\n* 'E\\_i': the energy of jet constituent \\(i\\).\n* 'PX\\_i': the \\(x\\) component of the jet constituent's momentum\n* 'PY\\_i': the \\(y\\) component of the jet constituent's momentum\n* 'PZ\\_i': the \\(z\\) component of the jet constituent's momentum\n* 'truthE': the energy of the top-quark\n* 'truthPX': the \\(x\\) component of the top quark's momentum\n* 'truthPY': the \\(y\\) component of the top quark's momentum\n* 'truthPZ': the \\(z\\) component of the top quark's momentum\n* 'ttv': a flag that indicates which split (train, validation, or test) that a jet belongs to. Redundant since each split is provided as a separate dataset\n* 'is\\_signal\\_new': the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.", "### Data Splits", "### Licensing Information\n\n\nThis dataset is released under the Creative Commons Attribution 4.0 International license.", "### Contributions\n\n\nThanks to @lewtun for adding this dataset." ]
8a852d571fd838da91f2beb879f489f382d1462b
# PLOD: An Abbreviation Detection Dataset This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection. ### Dataset We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here. 1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/> 2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/> 3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test). # Dataset Card for PLOD-filtered ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection - **Paper:** https://arxiv.org/abs/2204.12061 - **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-filtered - **Point of Contact:** [Diptesh Kanojia](mailto:[email protected]) ### Dataset Summary This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain. ### Supported Tasks and Leaderboards This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022. ### Languages English ## Dataset Structure ### Data Instances A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via Spacy NER, and a set of `ner_tags` which are limited to `AC` for `Acronym` and `LF` for `long-forms`. 
An example from the dataset:

{'id': '1',
'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}

### Data Fields

- id: the row identifier for the dataset point.
- tokens: The tokens contained in the text.
- pos_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.
- ner_tags: The tags for abbreviations and long-forms.

### Data Splits

|            | Train  | Valid | Test  |
| ---------- | ------ | ----- | ----- |
| Filtered   | 112652 | 24140 | 24140 |
| Unfiltered | 113860 | 24399 | 24399 |

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

Extracting the data from PLOS Journals online and then tokenization, normalization.

#### Who are the source language producers?

PLOS Journal

## Additional Information

### Dataset Curators

The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan.

### Licensing Information

CC-BY-SA 4.0

### Citation Information

[Needs More Information]

### Installation

We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions at these websites to set up your own custom training with our dataset to reproduce the experiments using Spacy.

OR<br/>

However, you can also reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model readme cards linked below. Before starting, please perform the following steps:

```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```

Now, you can use the notebook to reproduce the experiments.
### Model(s)

Our best performing models are hosted on the HuggingFace models repository:

| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |

On the links provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/>

### Usage

You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
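As a complement to the notebook, here is a minimal sketch of inspecting the label scheme directly. It assumes the dataset loads from this hub repository and that `ner_tags` is stored as a sequence of class labels, so the integer-to-name mapping can be read from the dataset features rather than hard-coded.

```python
from datasets import load_dataset

# A sketch: load the filtered split and decode the integer ner_tags.
ds = load_dataset('surrey-nlp/PLOD-filtered', split='train')

# Recover the string labels behind the integers (abbreviation vs long-form
# tags) from the dataset features instead of hard-coding them.
label_names = ds.features['ner_tags'].feature.names

example = ds[0]
for token, tag in zip(example['tokens'], example['ner_tags']):
    print(f'{token}\t{label_names[tag]}')
```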
surrey-nlp/PLOD-filtered
[ "task_categories:token-classification", "annotations_creators:Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "abbreviation-detection", "arxiv:2204.12061", "region:us" ]
2022-04-16T13:50:15+00:00
{"annotations_creators": ["Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan"], "language_creators": ["found"], "language": ["en"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "plod-filtered", "pretty_name": "PLOD: An Abbreviation Detection Dataset", "tags": ["abbreviation-detection"]}
2023-01-14T23:30:12+00:00
[ "2204.12061" ]
[ "en" ]
TAGS #task_categories-token-classification #annotations_creators-Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-4.0 #abbreviation-detection #arxiv-2204.12061 #region-us
PLOD: An Abbreviation Detection Dataset ======================================= This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection. ### Dataset We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here. 1. The Filtered version can be accessed via Huggingface Datasets here and a CONLL format is present here. 2. The Unfiltered version can be accessed via Huggingface Datasets here and a CONLL format is present here. 3. The SDU Shared Task data we use for zero-shot testing is available here. Dataset Card for PLOD-filtered ============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: Diptesh Kanojia ### Dataset Summary This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain. ### Supported Tasks and Leaderboards This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022. ### Languages English Dataset Structure ----------------- ### Data Instances A typical data point comprises an ID, a set of 'tokens' present in the text, a set of 'pos\_tags' for the corresponding tokens obtained via Spacy NER, and a set of 'ner\_tags' which are limited to 'AC' for 'Acronym' and 'LF' for 'long-forms'. An example from the dataset: {'id': '1', 'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'], 'pos\_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13], 'ner\_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ### Data Fields * id: the row identifier for the dataset point. * tokens: The tokens contained in the text. * pos\_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER. * ner\_tags: The tags for abbreviations and long-forms. ### Data Splits Dataset Creation ---------------- ### Source Data #### Initial Data Collection and Normalization Extracting the data from PLOS Journals online and then tokenization, normalization. #### Who are the source language producers? 
PLOS Journal Additional Information ---------------------- ### Dataset Curators The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan. ### Licensing Information CC-BY-SA 4.0 ### Installation We use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training via any pre-trained language models available at the :rocket: HuggingFace repository. Please see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy. OR However, you can also reproduce the experiments via the Python notebook we provide here which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps: Now, you can use the notebook to reproduce the experiments. ### Model(s) Our best performing models are hosted on the HuggingFace models repository On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing. ### Usage You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
[ "### Dataset\n\n\nWe provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.\n\n\n1. The Filtered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n2. The Unfiltered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n3. The SDU Shared Task data we use for zero-shot testing is available here.\n\n\nDataset Card for PLOD-filtered\n==============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\n\nDataset Description\n-------------------\n\n\n* Homepage:\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: Diptesh Kanojia", "### Dataset Summary\n\n\nThis PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.", "### Supported Tasks and Leaderboards\n\n\nThis dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises an ID, a set of 'tokens' present in the text, a set of 'pos\\_tags' for the corresponding tokens obtained via Spacy NER, and a set of 'ner\\_tags' which are limited to 'AC' for 'Acronym' and 'LF' for 'long-forms'.\n\n\nAn example from the dataset:\n{'id': '1',\n'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],\n'pos\\_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],\n'ner\\_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n}", "### Data Fields\n\n\n* id: the row identifier for the dataset point.\n* tokens: The tokens contained in the text.\n* pos\\_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.\n* ner\\_tags: The tags for abbreviations and long-forms.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nExtracting the data from PLOS Journals online and then tokenization, normalization.", "#### Who are the source language producers?\n\n\nPLOS Journal\n\n\nAdditional 
Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,\nDiptesh Kanojia, Constantin Orasan.", "### Licensing Information\n\n\nCC-BY-SA 4.0", "### Installation\n\n\nWe use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training via any pre-trained language models available at the :rocket: HuggingFace repository. \n\nPlease see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy.\n\n\nOR \n\n\n\nHowever, you can also reproduce the experiments via the Python notebook we provide here which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps:\n\n\nNow, you can use the notebook to reproduce the experiments.", "### Model(s)\n\n\nOur best performing models are hosted on the HuggingFace models repository\n\n\n\nOn the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.", "### Usage\n\n\nYou can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo." ]
[ "TAGS\n#task_categories-token-classification #annotations_creators-Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-4.0 #abbreviation-detection #arxiv-2204.12061 #region-us \n", "### Dataset\n\n\nWe provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.\n\n\n1. The Filtered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n2. The Unfiltered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n3. The SDU Shared Task data we use for zero-shot testing is available here.\n\n\nDataset Card for PLOD-filtered\n==============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\n\nDataset Description\n-------------------\n\n\n* Homepage:\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: Diptesh Kanojia", "### Dataset Summary\n\n\nThis PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.", "### Supported Tasks and Leaderboards\n\n\nThis dataset primarily supports the Abbreviation Detection Task. 
It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises an ID, a set of 'tokens' present in the text, a set of 'pos\\_tags' for the corresponding tokens obtained via Spacy NER, and a set of 'ner\\_tags' which are limited to 'AC' for 'Acronym' and 'LF' for 'long-forms'.\n\n\nAn example from the dataset:\n{'id': '1',\n'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],\n'pos\\_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],\n'ner\\_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n}", "### Data Fields\n\n\n* id: the row identifier for the dataset point.\n* tokens: The tokens contained in the text.\n* pos\\_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.\n* ner\\_tags: The tags for abbreviations and long-forms.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nExtracting the data from PLOS Journals online and then tokenization, normalization.", "#### Who are the source language producers?\n\n\nPLOS Journal\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,\nDiptesh Kanojia, Constantin Orasan.", "### Licensing Information\n\n\nCC-BY-SA 4.0", "### Installation\n\n\nWe use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training via any pre-trained language models available at the :rocket: HuggingFace repository. \n\nPlease see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy.\n\n\nOR \n\n\n\nHowever, you can also reproduce the experiments via the Python notebook we provide here which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps:\n\n\nNow, you can use the notebook to reproduce the experiments.", "### Model(s)\n\n\nOur best performing models are hosted on the HuggingFace models repository\n\n\n\nOn the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.", "### Usage\n\n\nYou can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo." ]
9caa35b94fe9c220e003d581e3b37e55012993b5
# PLOD: An Abbreviation Detection Dataset This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection. ### Dataset We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here. 1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/> 2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/> 3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test). # Dataset Card for PLOD-unfiltered ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection - **Paper:** https://arxiv.org/abs/2204.12061 - **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-an-abbreviation - **Point of Contact:** [Diptesh Kanojia](mailto:[email protected]) ### Dataset Summary This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain. ### Supported Tasks and Leaderboards This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022. ### Languages English ## Dataset Structure ### Data Instances A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via Spacy NER, and a set of `ner_tags` which are limited to `AC` for `Acronym` and `LF` for `long-forms`. 
An example from the dataset:

{'id': '1',
'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}

### Data Fields

- id: the row identifier for the dataset point.
- tokens: The tokens contained in the text.
- pos_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.
- ner_tags: The tags for abbreviations and long-forms.

### Data Splits

|            | Train  | Valid | Test  |
| ---------- | ------ | ----- | ----- |
| Filtered   | 112652 | 24140 | 24140 |
| Unfiltered | 113860 | 24399 | 24399 |

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

Extracting the data from PLOS Journals online and then tokenization, normalization.

#### Who are the source language producers?

PLOS Journal

## Additional Information

### Dataset Curators

The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan.

### Licensing Information

CC-BY-SA 4.0

### Citation Information

[Needs More Information]

### Installation

We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions at these websites to set up your own custom training with our dataset to reproduce the experiments using Spacy.

OR<br/>

However, you can also reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model readme cards linked below. Before starting, please perform the following steps:

```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```

Now, you can use the notebook to reproduce the experiments.
### Model(s)

Our best performing models are hosted on the HuggingFace models repository:

| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |

On the links provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/>

### Usage

You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
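For quick experimentation without the notebook, the fine-tuned checkpoint listed in the table above can be exercised through the `transformers` pipeline. This is a sketch: the `aggregation_strategy` value and the example sentence are illustrative choices, not part of the original card.

```python
from transformers import pipeline

# Token-classification inference with the fine-tuned checkpoint from the
# Model(s) table above.
abbr_tagger = pipeline(
    'token-classification',
    model='surrey-nlp/roberta-large-finetuned-abbr',
    aggregation_strategy='simple',  # merge word pieces into whole spans
)

text = ('Light dissolved inorganic carbon (DIC) resulting from the oxidation '
        'of hydrocarbons.')
for entity in abbr_tagger(text):
    print(entity['entity_group'], entity['word'], round(float(entity['score']), 3))
```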
surrey-nlp/PLOD-unfiltered
[ "task_categories:token-classification", "annotations_creators:Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "abbreviation-detection", "arxiv:2204.12061", "region:us" ]
2022-04-16T17:49:49+00:00
{"annotations_creators": ["Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "plod-an-abbreviation-detection-dataset-for", "pretty_name": "PLOD: An Abbreviation Detection Dataset", "tags": ["abbreviation-detection"]}
2023-01-14T23:31:04+00:00
[ "2204.12061" ]
[ "en" ]
TAGS #task_categories-token-classification #annotations_creators-Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-4.0 #abbreviation-detection #arxiv-2204.12061 #region-us
PLOD: An Abbreviation Detection Dataset ======================================= This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection. ### Dataset We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here. 1. The Filtered version can be accessed via Huggingface Datasets here and a CONLL format is present here. 2. The Unfiltered version can be accessed via Huggingface Datasets here and a CONLL format is present here. 3. The SDU Shared Task data we use for zero-shot testing is available here. Dataset Card for PLOD-unfiltered ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: Diptesh Kanojia ### Dataset Summary This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain. ### Supported Tasks and Leaderboards This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022. ### Languages English Dataset Structure ----------------- ### Data Instances A typical data point comprises an ID, a set of 'tokens' present in the text, a set of 'pos\_tags' for the corresponding tokens obtained via Spacy NER, and a set of 'ner\_tags' which are limited to 'AC' for 'Acronym' and 'LF' for 'long-forms'. An example from the dataset: {'id': '1', 'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'], 'pos\_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13], 'ner\_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ### Data Fields * id: the row identifier for the dataset point. * tokens: The tokens contained in the text. * pos\_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER. * ner\_tags: The tags for abbreviations and long-forms. ### Data Splits Dataset Creation ---------------- ### Source Data #### Initial Data Collection and Normalization Extracting the data from PLOS Journals online and then tokenization, normalization. #### Who are the source language producers? 
PLOS Journal Additional Information ---------------------- ### Dataset Curators The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan. ### Licensing Information CC-BY-SA 4.0 ### Installation We use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training via any pre-trained language models available at the :rocket: HuggingFace repository. Please see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy. OR However, you can also reproduce the experiments via the Python notebook we provide here which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps: Now, you can use the notebook to reproduce the experiments. ### Model(s) Our best performing models are hosted on the HuggingFace models repository: On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing. ### Usage You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
[ "### Dataset\n\n\nWe provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.\n\n\n1. The Filtered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n2. The Unfiltered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n3. The SDU Shared Task data we use for zero-shot testing is available here.\n\n\nDataset Card for PLOD-unfiltered\n================================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\n\nDataset Description\n-------------------\n\n\n* Homepage:\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: Diptesh Kanojia", "### Dataset Summary\n\n\nThis PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.", "### Supported Tasks and Leaderboards\n\n\nThis dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises an ID, a set of 'tokens' present in the text, a set of 'pos\\_tags' for the corresponding tokens obtained via Spacy NER, and a set of 'ner\\_tags' which are limited to 'AC' for 'Acronym' and 'LF' for 'long-forms'.\n\n\nAn example from the dataset:\n{'id': '1',\n'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],\n'pos\\_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],\n'ner\\_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n}", "### Data Fields\n\n\n* id: the row identifier for the dataset point.\n* tokens: The tokens contained in the text.\n* pos\\_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.\n* ner\\_tags: The tags for abbreviations and long-forms.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nExtracting the data from PLOS Journals online and then tokenization, normalization.", "#### Who are the source language producers?\n\n\nPLOS Journal\n\n\nAdditional 
Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,\nDiptesh Kanojia, Constantin Orasan.", "### Licensing Information\n\n\nCC-BY-SA 4.0", "### Installation\n\n\nWe use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training via any pre-trained language models available at the :rocket: HuggingFace repository. \n\nPlease see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy.\n\n\nOR \n\n\n\nHowever, you can also reproduce the experiments via the Python notebook we provide here which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps:\n\n\nNow, you can use the notebook to reproduce the experiments.", "### Model(s)\n\n\nOur best performing models are hosted on the HuggingFace models repository:\n\n\n\nOn the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.", "### Usage\n\n\nYou can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo." ]
[ "TAGS\n#task_categories-token-classification #annotations_creators-Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-4.0 #abbreviation-detection #arxiv-2204.12061 #region-us \n", "### Dataset\n\n\nWe provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.\n\n\n1. The Filtered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n2. The Unfiltered version can be accessed via Huggingface Datasets here and a CONLL format is present here.\n3. The SDU Shared Task data we use for zero-shot testing is available here.\n\n\nDataset Card for PLOD-unfiltered\n================================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\n\nDataset Description\n-------------------\n\n\n* Homepage:\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: Diptesh Kanojia", "### Dataset Summary\n\n\nThis PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.", "### Supported Tasks and Leaderboards\n\n\nThis dataset primarily supports the Abbreviation Detection Task. 
It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises an ID, a set of 'tokens' present in the text, a set of 'pos\\_tags' for the corresponding tokens obtained via Spacy NER, and a set of 'ner\\_tags' which are limited to 'AC' for 'Acronym' and 'LF' for 'long-forms'.\n\n\nAn example from the dataset:\n{'id': '1',\n'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],\n'pos\\_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],\n'ner\\_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n}", "### Data Fields\n\n\n* id: the row identifier for the dataset point.\n* tokens: The tokens contained in the text.\n* pos\\_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.\n* ner\\_tags: The tags for abbreviations and long-forms.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nExtracting the data from PLOS Journals online and then tokenization, normalization.", "#### Who are the source language producers?\n\n\nPLOS Journal\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,\nDiptesh Kanojia, Constantin Orasan.", "### Licensing Information\n\n\nCC-BY-SA 4.0", "### Installation\n\n\nWe use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training via any pre-trained language models available at the :rocket: HuggingFace repository. \n\nPlease see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy.\n\n\nOR \n\n\n\nHowever, you can also reproduce the experiments via the Python notebook we provide here which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps:\n\n\nNow, you can use the notebook to reproduce the experiments.", "### Model(s)\n\n\nOur best performing models are hosted on the HuggingFace models repository:\n\n\n\nOn the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.", "### Usage\n\n\nYou can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo." ]
1bb2f5816caf50002da4e7bd5ec845fec22eb4cd
# AutoTrain Dataset for project: Tweets

## Dataset Description

This dataset has been automatically processed by AutoTrain for project Tweets.

### Languages

The BCP-47 code for the dataset's language is en.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "text": "So the mask mandate goes away the day after #Furnal2022 ends, and you know what will happen after th[...]",
    "target": 0
  },
  {
    "text": "@EwanMacKenna Also does anyone know whether Margaret Buttimer of Bandon is still in prison for the '[...]",
    "target": 1
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "text": "Value(dtype='string', id=None)",
  "target": "ClassLabel(num_classes=3, names=['1', '2', '3'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 1679        |
| valid      | 420         |
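A minimal, hedged sketch of loading these splits with the `datasets` library follows; it assumes the files load directly from this Hub repository under the `train`/`valid` split names shown in the table above.

```python
from datasets import load_dataset

ds = load_dataset("Paercky/autotrain-data-Tweets")
print(ds)  # expected: DatasetDict with "train" (1679 rows) and "valid" (420 rows)

# "target" is a ClassLabel, so integer values map back to the names above.
label_names = ds["train"].features["target"].names  # ['1', '2', '3']
sample = ds["train"][0]
print(sample["text"][:80], "->", label_names[sample["target"]])
```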
Paercky/autotrain-data-Tweets
[ "task_categories:text-classification", "language:en", "region:us" ]
2022-04-16T20:46:11+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2022-10-25T09:08:35+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #region-us
AutoTrain Dataset for project: Tweets
=====================================

Dataset Description
-------------------

This dataset has been automatically processed by AutoTrain for project Tweets.

### Languages

The BCP-47 code for the dataset's language is en.

Dataset Structure
-----------------

### Data Instances

A sample from this dataset looks as follows:

### Dataset Fields

The dataset has the following fields (also called "features"):

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #language-English #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
a4d324be68761bd614dd3be85ccaac497001fabb
# Malay-TTS-Osman

All related notebooks and code are at https://github.com/huseinzol05/malaya-speech/tree/master/data/azure-tts

## Attributes

### Wiki and News

- 24000 sample rate, super clean.
- narrator `ms-MY-OsmanNeural`.
- approximately 94.5 hours.
- Texts from Malay Wikipedia and News.
- Sentences between 2 words and 20 words.

### Parliament

- 24000 sample rate, super clean.
- narrator `ms-MY-OsmanNeural`.
- approximately 133.2 hours.
- Texts from the Malaysian Malay Parliament.
- Sentences between 2 words and 25 words.

## how-to

### Wiki and News

1. Download [populated-text.json](populated-text.json) and [tts-malay-osman.tar.gz](tts-malay-osman.tar.gz).

2. To get a wav file and its transcript:

```python
import json
import soundfile as sf

with open('populated-text.json') as fopen:
    texts = json.load(fopen)

index = 0
text = texts[index]
y, sr = sf.read(f'male/{index}.wav')
```

### Parliament

1. Download [populated-parliament.json](populated-parliament.json) and [tts-malay-osman-parliament.tar.gz](tts-malay-osman-parliament.tar.gz).

2. To get a wav file and its transcript:

```python
import json
import soundfile as sf

with open('populated-parliament.json') as fopen:
    texts = json.load(fopen)

index = 0
text = texts[index]
y, sr = sf.read(f'male-parliament/{index}.wav')
```
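To double-check the stated sample rate and the approximate hour counts after extracting an archive, a small sketch like the following can be used; the `male/` directory layout and one wav file per JSON entry are assumptions based on the snippet above.

```python
import json
import soundfile as sf

with open('populated-text.json') as fopen:
    texts = json.load(fopen)

total_seconds = 0.0
for index in range(len(texts)):  # assumes one wav per transcript entry
    info = sf.info(f'male/{index}.wav')
    assert info.samplerate == 24000  # the card states a 24000 sample rate
    total_seconds += info.frames / info.samplerate

print(f'{total_seconds / 3600:.1f} hours')  # the card reports ~94.5 hours
```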
huseinzol05/Malay-TTS-Osman
[ "region:us" ]
2022-04-16T22:15:43+00:00
{}
2022-04-17T04:39:21+00:00
[]
[]
TAGS #region-us
# Malay-TTS-Osman

All related notebooks and code are at URL

## Attributes

### Wiki and News

- 24000 sample rate, super clean.
- narrator 'ms-MY-OsmanNeural'.
- approximately 94.5 hours.
- Texts from Malay Wikipedia and News.
- Sentences between 2 words and 20 words.

### Parliament

- 24000 sample rate, super clean.
- narrator 'ms-MY-OsmanNeural'.
- approximately 133.2 hours.
- Texts from the Malaysian Malay Parliament.
- Sentences between 2 words and 25 words.

## how-to

### Wiki and News

1. Download URL and URL.

2. To get a wav file and its transcript,

### Parliament

1. Download URL and URL.

2. To get a wav file and its transcript,
[ "# Malay-TTS-Osman\n\nAll notebooks and code related at URL", "## Attributes", "### Wiki and News\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-OsmanNeural'.\n- approximate 94.5 hours\n- Texts from Malay Wikipedia and News.\n- Sentences between 2 words and 20 words.", "### Parliament\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-OsmanNeural'.\n- approximate 133.2 hours.\n- Texts from Malaysia Malay Parliament.\n- Sentences between 2 words and 25 words.", "## how-to", "### Wiki and News\n\n1. Download URL and URL.\n\n2. To get wav and transcript,", "### Parliament\n\n1. Download URL and URL.\n\n2. To get wav and transcript," ]
[ "TAGS\n#region-us \n", "# Malay-TTS-Osman\n\nAll notebooks and code related at URL", "## Attributes", "### Wiki and News\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-OsmanNeural'.\n- approximate 94.5 hours\n- Texts from Malay Wikipedia and News.\n- Sentences between 2 words and 20 words.", "### Parliament\n\n- 24000 sample rate, super clean.\n- narrator 'ms-MY-OsmanNeural'.\n- approximate 133.2 hours.\n- Texts from Malaysia Malay Parliament.\n- Sentences between 2 words and 25 words.", "## how-to", "### Wiki and News\n\n1. Download URL and URL.\n\n2. To get wav and transcript,", "### Parliament\n\n1. Download URL and URL.\n\n2. To get wav and transcript," ]
7092c27872e919f31d0496fb8b9c47bd2cba3f6c
# Dataset Card for "IndicXNLI"

## Table of Contents

- [Dataset Card for "IndicXNLI"](#dataset-card-for-indicxnli)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)

## Dataset Description

- **Homepage:** <https://github.com/divyanshuaggarwal/IndicXNLI>
- **Paper:** [IndicXNLI: Evaluating Multilingual Inference for Indian Languages](https://arxiv.org/abs/2204.08776)
- **Point of Contact:** [Divyanshu Aggarwal](mailto:[email protected])

### Dataset Summary

INDICXNLI is similar in shape and form to the existing XNLI dataset, but focusses on the Indic language family. INDICXNLI includes NLI data for eleven major Indic languages: Assamese (‘as’), Gujarati (‘gu’), Kannada (‘kn’), Malayalam (‘ml’), Marathi (‘mr’), Odia (‘or’), Punjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi (‘hi’), and Bengali (‘bn’).

### Supported Tasks and Leaderboards

**Tasks:** Natural Language Inference

**Leaderboards:** Currently there is no leaderboard for this dataset.

### Languages

- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`

## Dataset Structure

### Data Instances

One example from the `hi` dataset is given below in JSON format.

```python
{'premise': 'अवधारणात्मक रूप से क्रीम स्किमिंग के दो बुनियादी आयाम हैं-उत्पाद और भूगोल।',
 'hypothesis': 'उत्पाद और भूगोल क्रीम स्किमिंग का काम करते हैं।',
 'label': 1 (neutral)
}
```

### Data Fields

- `premise (string)`: the premise sentence
- `hypothesis (string)`: the hypothesis sentence
- `label (integer)`: `0` if the hypothesis entails the premise, `2` if the hypothesis contradicts the premise, and `1` otherwise

### Data Splits

| Language  | ISO 639-1 Code | Train   | Test  | Dev   |
|-----------|----------------|---------|-------|-------|
| Assamese  | as             | 392,702 | 5,010 | 2,490 |
| Bengali   | bn             | 392,702 | 5,010 | 2,490 |
| Gujarati  | gu             | 392,702 | 5,010 | 2,490 |
| Hindi     | hi             | 392,702 | 5,010 | 2,490 |
| Kannada   | kn             | 392,702 | 5,010 | 2,490 |
| Malayalam | ml             | 392,702 | 5,010 | 2,490 |
| Marathi   | mr             | 392,702 | 5,010 | 2,490 |
| Oriya     | or             | 392,702 | 5,010 | 2,490 |
| Punjabi   | pa             | 392,702 | 5,010 | 2,490 |
| Tamil     | ta             | 392,702 | 5,010 | 2,490 |
| Telugu    | te             | 392,702 | 5,010 | 2,490 |

The dataset split remains the same across all languages.

## Dataset usage

Code snippet for using the dataset with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("Divyanshu/indicxnli")
```

## Dataset Creation

Machine translation of the English XNLI dataset into the eleven listed Indic languages.

### Curation Rationale

[More information needed]

### Source Data

[XNLI dataset](https://cims.nyu.edu/~sbowman/xnli/)

#### Initial Data Collection and Normalization

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

#### Who are the source language producers?

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

#### Human Verification Process

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

## Considerations for Using the Data

### Social Impact of Dataset

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

### Discussion of Biases

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

### Other Known Limitations

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

### Dataset Curators

Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan

### Licensing Information

Contents of this repository are restricted to non-commercial research purposes only under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.

### Citation Information

If you use any of the datasets, models or code modules, please cite the following paper:

```
@misc{https://doi.org/10.48550/arxiv.2204.08776,
  doi = {10.48550/ARXIV.2204.08776},
  url = {https://arxiv.org/abs/2204.08776},
  author = {Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {IndicXNLI: Evaluating Multilingual Inference for Indian Languages},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
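Building on the snippet above, here is a minimal sketch for working with a single language and human-readable labels; treating each language as a config name (`"hi"` below) and the 0/1/2 label convention are assumptions based on this card.

```python
from datasets import load_dataset

dataset = load_dataset("Divyanshu/indicxnli", "hi")  # config name assumed
id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}

ex = dataset["validation"][0]
print(ex["premise"])
print(ex["hypothesis"])
print(id2label[ex["label"]])
```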
Divyanshu/indicxnli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:as", "language:bn", "language:gu", "language:hi", "language:kn", "language:ml", "language:mr", "language:or", "language:pa", "language:ta", "language:te", "license:cc0-1.0", "arxiv:2204.08776", "region:us" ]
2022-04-17T16:48:10+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "IndicXNLI"}
2022-10-06T14:26:00+00:00
[ "2204.08776" ]
[ "as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc0-1.0 #arxiv-2204.08776 #region-us
Dataset Card for "IndicXNLI" ============================ Table of Contents ----------------- * Dataset Card for "IndicXNLI" + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits Dataset Description ------------------- * Homepage: <URL * Paper: IndicXNLI: Evaluating Multilingual Inference for Indian Languages * Point of Contact: Divyanshu Aggarwal ### Dataset Summary INDICXNLI is similar to existing XNLI dataset in shape/form, but focusses on Indic language family. INDICXNLI include NLI data for eleven major Indic languages that includes Assamese (‘as’), Gujarat (‘gu’), Kannada (‘kn’), Malayalam (‘ml’), Marathi (‘mr’), Odia (‘or’), Punjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi (‘hi’), and Bengali (‘bn’). ### Supported Tasks and Leaderboards Tasks: Natural Language Inference Leaderboards: Currently there is no Leaderboard for this dataset. ### Languages * 'Assamese (as)' * 'Bengali (bn)' * 'Gujarati (gu)' * 'Kannada (kn)' * 'Hindi (hi)' * 'Malayalam (ml)' * 'Marathi (mr)' * 'Oriya (or)' * 'Punjabi (pa)' * 'Tamil (ta)' * 'Telugu (te)' Dataset Structure ----------------- ### Data Instances One example from the 'hi' dataset is given below in JSON format. ### Data Fields * 'premise (string)': Premise Sentence * 'hypothesis (string)': Hypothesis Sentence * 'label (integer)': Integer label '0' if hypothesis 'entails' the premise, '2' if hypothesis 'negates' the premise and '1' otherwise. ### Data Splits Dataset usage ------------- Code snippet for using the dataset using datasets library. Dataset Creation ---------------- Machine translation of XNLI english dataset to 11 listed Indic Languages. ### Curation Rationale [More information needed] ### Source Data XNLI dataset #### Initial Data Collection and Normalization Detailed in the paper #### Who are the source language producers? Detailed in the paper #### Human Verification Process Detailed in the paper Considerations for Using the Data --------------------------------- ### Social Impact of Dataset Detailed in the paper ### Discussion of Biases Detailed in the paper ### Other Known Limitations Detailed in the paper ### Dataset Curators Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders. If you use any of the datasets, models or code modules, please cite the following paper:
[ "### Dataset Summary\n\n\nINDICXNLI is similar to existing\nXNLI dataset in shape/form, but focusses on Indic language family. INDICXNLI include NLI\ndata for eleven major Indic languages that includes\nAssamese (‘as’), Gujarat (‘gu’), Kannada (‘kn’),\nMalayalam (‘ml’), Marathi (‘mr’), Odia (‘or’),\nPunjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi\n(‘hi’), and Bengali (‘bn’).", "### Supported Tasks and Leaderboards\n\n\nTasks: Natural Language Inference\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.", "### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nOne example from the 'hi' dataset is given below in JSON format.", "### Data Fields\n\n\n* 'premise (string)': Premise Sentence\n* 'hypothesis (string)': Hypothesis Sentence\n* 'label (integer)': Integer label '0' if hypothesis 'entails' the premise, '2' if hypothesis 'negates' the premise and '1' otherwise.", "### Data Splits\n\n\n\nDataset usage\n-------------\n\n\nCode snippet for using the dataset using datasets library.\n\n\nDataset Creation\n----------------\n\n\nMachine translation of XNLI english dataset to 11 listed Indic Languages.", "### Curation Rationale\n\n\n[More information needed]", "### Source Data\n\n\nXNLI dataset", "#### Initial Data Collection and Normalization\n\n\nDetailed in the paper", "#### Who are the source language producers?\n\n\nDetailed in the paper", "#### Human Verification Process\n\n\nDetailed in the paper\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nDetailed in the paper", "### Discussion of Biases\n\n\nDetailed in the paper", "### Other Known Limitations\n\n\nDetailed in the paper", "### Dataset Curators\n\n\nDivyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan", "### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:" ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc0-1.0 #arxiv-2204.08776 #region-us \n", "### Dataset Summary\n\n\nINDICXNLI is similar to existing\nXNLI dataset in shape/form, but focusses on Indic language family. INDICXNLI include NLI\ndata for eleven major Indic languages that includes\nAssamese (‘as’), Gujarat (‘gu’), Kannada (‘kn’),\nMalayalam (‘ml’), Marathi (‘mr’), Odia (‘or’),\nPunjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi\n(‘hi’), and Bengali (‘bn’).", "### Supported Tasks and Leaderboards\n\n\nTasks: Natural Language Inference\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.", "### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nOne example from the 'hi' dataset is given below in JSON format.", "### Data Fields\n\n\n* 'premise (string)': Premise Sentence\n* 'hypothesis (string)': Hypothesis Sentence\n* 'label (integer)': Integer label '0' if hypothesis 'entails' the premise, '2' if hypothesis 'negates' the premise and '1' otherwise.", "### Data Splits\n\n\n\nDataset usage\n-------------\n\n\nCode snippet for using the dataset using datasets library.\n\n\nDataset Creation\n----------------\n\n\nMachine translation of XNLI english dataset to 11 listed Indic Languages.", "### Curation Rationale\n\n\n[More information needed]", "### Source Data\n\n\nXNLI dataset", "#### Initial Data Collection and Normalization\n\n\nDetailed in the paper", "#### Who are the source language producers?\n\n\nDetailed in the paper", "#### Human Verification Process\n\n\nDetailed in the paper\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nDetailed in the paper", "### Discussion of Biases\n\n\nDetailed in the paper", "### Other Known Limitations\n\n\nDetailed in the paper", "### Dataset Curators\n\n\nDivyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan", "### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:" ]
539315f30d0cfd1aa5765a65a4dcd6d93d168d20
Zagol News - On the very day the government announced that it was releasing close to five thousand prisoners as "graduates", students in Ambo were reported to have staged a protest. The students reportedly began the protest in violation of the state-of-emergency decree, chanting "Free Merera". The protest, which spread from school to school, endangered no lives but did damage property. Eyewitnesses saw a vehicle being burned. Unlike before, no forceful measures were taken against those who started the protest or against those who joined it later. According to the Oromia Media Network, however, many people have been detained.

Of those who had been taken into custody on suspicion of being forces of violence, rioters, and errand-runners of destructive forces who had set out to dismantle the constitution by force, some 4,035 were released, with the prisoners said to have "taken six courses and graduated". Citing the Commissioner of the Oromia Prisons Administration, Tsehay Belay, Fana reported that of the 5,600 trainees who entered the Tolay rehabilitation center, about 4,035 received training on six main subjects and graduated. The courses covered topics titled "Absolutely", "Never Again", "The Color Revolution", "The Ethiopian Constitution", and "The Ethiopian Renaissance". Prime Minister Hailemariam is said to have attended the graduation and given them a send-off. Many promises were made to them; lines were also drawn for them. Referring to "the constitution written in blood and bone, the constitution for which a price was paid", Mr. Hailemariam said that attempting to dismantle it by force is not possible. "Just as you said 'never again', we too say 'never again'," he said.

Fana's report reads as follows.

Addis Ababa, Tahsas 12, 2009 (FBC): Citizens who took part in the unrest that arose in various parts of the country and who had been receiving rehabilitation training at training centers are returning to where they came from. It is citizens who took rehabilitation training at the Awash, Alage, and Bir Sheleko centers who are returning to their areas. The 4,035 citizens who took one month of rehabilitation training at Tolay have also completed their training and will return to their home areas tomorrow, it was said.

Prime Minister Hailemariam Desalegn, who was present at the Tolay rehabilitation center, said in the message he delivered on the occasion that the government will provide support so that the trainees can return to their normal lives. Workers will be returned to their jobs, and students will be enabled to continue their education, Prime Minister Hailemariam said. The Prime Minister also stated that unemployed young people will be supported to create jobs in their own way.

He asked the youth to recognize that peace, development, and democracy are inseparable foundations of a country's existence and to discharge their responsibility to protect these values. He stated that even when the youth have demands, they have the right to present them and to receive answers in accordance with the constitution, citing as an example how raising demands through violence and unrest, as seen in the past months, exacted a price. So that such a situation is not repeated, he said the government is undertaking deep renewal to correct its own mistakes, and he conveyed the message that the youth should likewise correct their own mistakes and safeguard the peace together with the government.

For his part, the head of the Oromia regional government, Mr. Lemma Megersa, said that coordinated work will be carried out with every section of society to sustain the peace that has taken hold in the region. He noted that lives were lost in the unrest and turmoil created months earlier and said that the destruction of public assets toiled over for generations was not proper. He also underlined that the region can change and develop only when its youth jointly stand guard for peace. Now to
surafelkindu/Amharic_corpus
[ "license:mit", "region:us" ]
2022-04-17T17:06:43+00:00
{"license": "mit"}
2022-04-17T17:19:47+00:00
[]
[]
TAGS #license-mit #region-us
Zagol News - On the very day the government announced that it was releasing close to five thousand prisoners as "graduates", students in Ambo were reported to have staged a protest. The students reportedly began the protest in violation of the state-of-emergency decree, chanting "Free Merera". The protest, which spread from school to school, endangered no lives but did damage property. Eyewitnesses saw a vehicle being burned. Unlike before, no forceful measures were taken against those who started the protest or against those who joined it later. According to the Oromia Media Network, however, many people have been detained.

Of those who had been taken into custody on suspicion of being forces of violence, rioters, and errand-runners of destructive forces who had set out to dismantle the constitution by force, some 4,035 were released, with the prisoners said to have "taken six courses and graduated". Citing the Commissioner of the Oromia Prisons Administration, Tsehay Belay, Fana reported that of the 5,600 trainees who entered the Tolay rehabilitation center, about 4,035 received training on six main subjects and graduated. The courses covered topics titled "Absolutely", "Never Again", "The Color Revolution", "The Ethiopian Constitution", and "The Ethiopian Renaissance". Prime Minister Hailemariam is said to have attended the graduation and given them a send-off. Many promises were made to them; lines were also drawn for them. Referring to "the constitution written in blood and bone, the constitution for which a price was paid", Mr. Hailemariam said that attempting to dismantle it by force is not possible. "Just as you said 'never again', we too say 'never again'," he said.

Fana's report reads as follows.

Addis Ababa, Tahsas 12, 2009 (FBC): Citizens who took part in the unrest that arose in various parts of the country and who had been receiving rehabilitation training at training centers are returning to where they came from. It is citizens who took rehabilitation training at the Awash, Alage, and Bir Sheleko centers who are returning to their areas. The 4,035 citizens who took one month of rehabilitation training at Tolay have also completed their training and will return to their home areas tomorrow, it was said.

Prime Minister Hailemariam Desalegn, who was present at the Tolay rehabilitation center, said in the message he delivered on the occasion that the government will provide support so that the trainees can return to their normal lives. Workers will be returned to their jobs, and students will be enabled to continue their education, Prime Minister Hailemariam said. The Prime Minister also stated that unemployed young people will be supported to create jobs in their own way.

He asked the youth to recognize that peace, development, and democracy are inseparable foundations of a country's existence and to discharge their responsibility to protect these values. He stated that even when the youth have demands, they have the right to present them and to receive answers in accordance with the constitution, citing as an example how raising demands through violence and unrest, as seen in the past months, exacted a price. So that such a situation is not repeated, he said the government is undertaking deep renewal to correct its own mistakes, and he conveyed the message that the youth should likewise correct their own mistakes and safeguard the peace together with the government.

For his part, the head of the Oromia regional government, Mr. Lemma Megersa, said that coordinated work will be carried out with every section of society to sustain the peace that has taken hold in the region. He noted that lives were lost in the unrest and turmoil created months earlier and said that the destruction of public assets toiled over for generations was not proper. He also underlined that the region can change and develop only when its youth jointly stand guard for peace. Now to
[]
[ "TAGS\n#license-mit #region-us \n" ]
7cffe68258589932209921818b9f9e56324850e3
oLMpics README
KevinZ/oLMpics
[ "region:us" ]
2022-04-18T01:14:53+00:00
{}
2022-04-19T17:08:06+00:00
[]
[]
TAGS #region-us
oLMpics README
[]
[ "TAGS\n#region-us \n" ]
976943672aec93d411e31de606fc103e4aa6073b
Birds 400 - species image classification. 58,388 training images, 2,000 test images, and 2,000 validation images, all 224 X 224 X 3 in jpg format.

A dataset of 400 bird species: 58,388 training images, 2,000 test images (5 per species), and 2,000 validation images (5 per species). This is a very high-quality dataset; each image contains exactly one bird, and the bird typically occupies at least 50% of the pixels. As a result, even a moderately complex model can achieve training and test accuracies in the 90% range.

All images are 224 X 224 X 3 color images in jpg format. The dataset includes a train set, a test set, and a validation set, each containing 400 subdirectories, one per bird species. The data structure is very convenient if you use Keras ImageDataGenerator.flow_from_directory to create the train, test, and valid data generators. The dataset also includes a bird species csv file. This csv file contains three columns: the "filepaths" column holds the file path of each image file, and the "labels" column holds the class name (bird species) associated with that image file. Reading the csv file with pandas (Bird Species.csv) creates a pandas dataframe, which can then be split into traindf, testdf, and validdf dataframes to produce your own division of the data into train, test, and validation sets.

Note: the test and validation images in the dataset were hand-picked as the "best" images, so your model will likely achieve its highest accuracy scores on these sets compared with test and validation sets you create yourself. However, the latter is a more accurate measure of model performance on unseen images.

The images were gathered from internet searches by species name. After downloading the image files for a species, they were checked for duplicates with a python duplicate-image-detector program I developed. All detected duplicates were deleted to prevent them from being shared across the training, test, and validation sets.

After that, the images were cropped so that the bird occupies at least 50% of the pixels, then resized to 224 X 224 X 3 in jpg format. The cropping ensures that, when processed by a CNN, the images contain enough information to build a highly accurate classifier. Even a moderately robust model should achieve training, validation, and test accuracies in the high 90% range. Since the dataset is large, I recommend training with an image size of 150 X 150 X 3 to reduce training time. All files are numbered sequentially starting from one for each species, so test images are named 1.jpg to 5.jpg, and likewise for validation images. Training images are sequentially numbered with zero padding, e.g. 001.jpg, 002.jpg ... 010.jpg, 011.jpg ... 099.jpg, 100.jpg, 102.jpg. The zero padding preserves file order when used with python file functions and Keras flow from directory.

The training set is not balanced; the number of files varies by species. However, every species has at least 120 training image files. This imbalance did not affect my kernel classifier, which achieved over 98% accuracy on the test set.

One notable imbalance in the dataset is the ratio of male to female species images: roughly 85% of the images are of males and 15% of females. Typical males are far more diversely colored, while females of a species are usually bland, so male and female images of the same species can look entirely different. Almost all of the test and validation images are of males, so the classifier may not perform as well on images of females.
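A minimal sketch of the Keras loading pattern described above follows; the directory names ("train", "valid") and the CSV filename are assumptions about the extracted archive layout.

```python
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

df = pd.read_csv("birds.csv")  # filename assumed; columns include filepaths and labels
print(df.head())

datagen = ImageDataGenerator(rescale=1.0 / 255)
train_gen = datagen.flow_from_directory(
    "train", target_size=(224, 224), batch_size=32, class_mode="categorical")
valid_gen = datagen.flow_from_directory(
    "valid", target_size=(224, 224), batch_size=32, class_mode="categorical")
```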
student/birds_400
[ "region:us" ]
2022-04-18T02:13:37+00:00
{}
2022-04-18T02:15:55+00:00
[]
[]
TAGS #region-us
Birds 400 - species image classification. 58,388 training images, 2,000 test images, and 2,000 validation images, all 224 X 224 X 3 in jpg format.

A dataset of 400 bird species: 58,388 training images, 2,000 test images (5 per species), and 2,000 validation images (5 per species). This is a very high-quality dataset; each image contains exactly one bird, and the bird typically occupies at least 50% of the pixels. As a result, even a moderately complex model can achieve training and test accuracies in the 90% range.

All images are 224 X 224 X 3 color images in jpg format. The dataset includes a train set, a test set, and a validation set, each containing 400 subdirectories, one per bird species. The data structure is very convenient if you use Keras ImageDataGenerator.flow_from_directory to create the train, test, and valid data generators. The dataset also includes a bird species csv file. This csv file contains three columns: the "filepaths" column holds the file path of each image file, and the "labels" column holds the class name (bird species) associated with that image file. Reading the csv file with pandas (Bird Species.csv) creates a pandas dataframe, which can then be split into traindf, testdf, and validdf dataframes to produce your own division of the data into train, test, and validation sets.

Note: the test and validation images in the dataset were hand-picked as the "best" images, so your model will likely achieve its highest accuracy scores on these sets compared with test and validation sets you create yourself. However, the latter is a more accurate measure of model performance on unseen images.

The images were gathered from internet searches by species name. After downloading the image files for a species, they were checked for duplicates with a python duplicate-image-detector program I developed. All detected duplicates were deleted to prevent them from being shared across the training, test, and validation sets.

After that, the images were cropped so that the bird occupies at least 50% of the pixels, then resized to 224 X 224 X 3 in jpg format. The cropping ensures that, when processed by a CNN, the images contain enough information to build a highly accurate classifier. Even a moderately robust model should achieve training, validation, and test accuracies in the high 90% range. Since the dataset is large, I recommend training with an image size of 150 X 150 X 3 to reduce training time. All files are numbered sequentially starting from one for each species, so test images are named 1.jpg to 5.jpg, and likewise for validation images. Training images are sequentially numbered with zero padding, e.g. 001.jpg, 002.jpg ... 010.jpg, 011.jpg ... 099.jpg, 100.jpg, 102.jpg. The zero padding preserves file order when used with python file functions and Keras flow from directory.

The training set is not balanced; the number of files varies by species. However, every species has at least 120 training image files. This imbalance did not affect my kernel classifier, which achieved over 98% accuracy on the test set.

One notable imbalance in the dataset is the ratio of male to female species images: roughly 85% of the images are of males and 15% of females. Typical males are far more diversely colored, while females of a species are usually bland, so male and female images of the same species can look entirely different. Almost all of the test and validation images are of males, so the classifier may not perform as well on images of females.
[]
[ "TAGS\n#region-us \n" ]
254dd05ce1dd064b434436fb491836b3b489fd9b
Introduction to the CUB-200-2011 dataset:

This fine-grained dataset was introduced by Caltech in 2010 and is currently the benchmark image dataset for fine-grained classification and recognition research. It contains 11,788 bird images covering 200 bird subcategories; the training set has 5,994 images and the test set has 5,794. Every image comes with an image-level class label, the bounding box of the bird in the image, key part annotations for the bird, and bird attribute information (illustrated with a figure in the original post).

The downloaded dataset contains the following files: bounding_boxes.txt; classes.txt; image_class_labels.txt; images.txt; train_test_split.txt.

Here, bounding_boxes.txt holds the bounding-box information for the bird in each image; classes.txt holds the bird category information (200 classes in total); image_class_labels.txt maps image labels to their class labels; images.txt holds the image labels and image paths; train_test_split.txt defines the training/test split.

This post mainly uses the train_test_split.txt and images.txt files to divide the originally downloaded CUB-200-2011 dataset into a training set and a test set. Under the PyTorch deep-learning framework, reading the dataset with ImageFolder and DataLoader is quite convenient. The relevant python code is as follows:

(1) CUB-200-2011 training/test split code

```python
# *_*coding: utf-8 *_*
# author --liming--

"""
Read images.txt to obtain the label of every image.
Read train_test_split.txt to obtain the train/test flag of every image,
where 1 means training and 0 means test.
"""

import os
import shutil
import time

import config

time_start = time.time()

# file paths
path_images = config.path + 'images.txt'
path_split = config.path + 'train_test_split.txt'
trian_save_path = config.path + 'dataset/train/'
test_save_path = config.path + 'dataset/test/'

# read the images.txt file
images = []
with open(path_images, 'r') as f:
    for line in f:
        images.append(list(line.strip('\n').split(',')))

# read the train_test_split.txt file
split = []
with open(path_split, 'r') as f_:
    for line in f_:
        split.append(list(line.strip('\n').split(',')))

# divide the images
num = len(images)  # total number of images
for k in range(num):
    image_path = images[k][0].split(' ')[1]
    file_name = image_path.split('/')[0]
    if int(split[k][0][-1]) == 1:  # flag 1: training set
        save_path = trian_save_path
    else:  # flag 0: test set
        save_path = test_save_path
    # create the class folder on first use
    if not os.path.isdir(save_path + file_name):
        os.makedirs(save_path + file_name)
    shutil.copy(config.path + 'images/' + image_path,
                save_path + file_name + '/' + image_path.split('/')[1])
    print('%s done!' % image_path.split('/')[1])

time_end = time.time()
print('CUB200 training/test split finished, elapsed %ss!!' % (time_end - time_start))
```

config file:

```python
# *_*coding: utf-8 *_*
# author --liming--

path = '/media/lm/C3F680DFF08EB695/细粒度数据集/birds/CUB200/CUB_200_2011/'

# directories produced by the split script above
ROOT_TRAIN = path + 'dataset/train/'
ROOT_TEST = path + 'dataset/test/'
BATCH_SIZE = 16
```

(2) Reading the data with PyTorch

```python
# *_*coding: utf-8 *_*
# author --liming--

"""
Convert the already-downloaded dataset so that pytorch can read it easily.
"""

import torch
import torchvision
import config
from torchvision import datasets, transforms

data_transform = transforms.Compose([
    transforms.Resize((224, 224)),  # resize both edges so samples batch cleanly
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

def train_data_load():
    # training set
    root_train = config.ROOT_TRAIN
    train_dataset = torchvision.datasets.ImageFolder(root_train,
                                                     transform=data_transform)
    CLASS = train_dataset.class_to_idx
    print('training label to folder mapping:', CLASS)
    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=config.BATCH_SIZE,
                                               shuffle=True)
    return CLASS, train_loader

def test_data_load():
    # test set
    root_test = config.ROOT_TEST
    test_dataset = torchvision.datasets.ImageFolder(root_test,
                                                    transform=data_transform)
    CLASS = test_dataset.class_to_idx
    print('test label to folder mapping:', CLASS)
    test_loader = torch.utils.data.DataLoader(test_dataset,
                                              batch_size=config.BATCH_SIZE,
                                              shuffle=True)
    return CLASS, test_loader

if __name__ == '__main__':
    train_data_load()
    test_data_load()
```
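A quick, illustrative smoke test for the loaders defined above; it assumes both functions live in the same module and that the split script has already populated the ImageFolder directories.

```python
if __name__ == '__main__':
    _, train_loader = train_data_load()
    images, labels = next(iter(train_loader))
    print(images.shape)  # expected: torch.Size([16, 3, 224, 224])
    print(labels[:8])    # integer class indices assigned by ImageFolder
```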
student/CUB_birds_200_2011
[ "region:us" ]
2022-04-18T02:19:41+00:00
{}
2022-04-18T02:21:03+00:00
[]
[]
TAGS #region-us
Introduction to the CUB-200-2011 dataset:

This fine-grained dataset was introduced by Caltech in 2010 and is currently the benchmark image dataset for fine-grained classification and recognition research. It contains 11,788 bird images covering 200 bird subcategories; the training set has 5,994 images and the test set has 5,794. Every image comes with an image-level class label, the bounding box of the bird in the image, key part annotations for the bird, and bird attribute information (illustrated with a figure in the original post).

The downloaded dataset contains the following files: bounding_boxes.txt; classes.txt; image_class_labels.txt; images.txt; train_test_split.txt.

Here, bounding_boxes.txt holds the bounding-box information for the bird in each image; classes.txt holds the bird category information (200 classes in total); image_class_labels.txt maps image labels to their class labels; images.txt holds the image labels and image paths; train_test_split.txt defines the training/test split.

This post mainly uses the train_test_split.txt and images.txt files to divide the originally downloaded CUB-200-2011 dataset into a training set and a test set. Under the PyTorch deep-learning framework, reading the dataset with ImageFolder and DataLoader is quite convenient. The relevant python code is as follows:

(1) CUB-200-2011 training/test split code

```python
# *_*coding: utf-8 *_*
# author --liming--

"""
Read images.txt to obtain the label of every image.
Read train_test_split.txt to obtain the train/test flag of every image,
where 1 means training and 0 means test.
"""

import os
import shutil
import time

import config

time_start = time.time()

# file paths
path_images = config.path + 'images.txt'
path_split = config.path + 'train_test_split.txt'
trian_save_path = config.path + 'dataset/train/'
test_save_path = config.path + 'dataset/test/'

# read the images.txt file
images = []
with open(path_images, 'r') as f:
    for line in f:
        images.append(list(line.strip('\n').split(',')))

# read the train_test_split.txt file
split = []
with open(path_split, 'r') as f_:
    for line in f_:
        split.append(list(line.strip('\n').split(',')))

# divide the images
num = len(images)  # total number of images
for k in range(num):
    image_path = images[k][0].split(' ')[1]
    file_name = image_path.split('/')[0]
    if int(split[k][0][-1]) == 1:  # flag 1: training set
        save_path = trian_save_path
    else:  # flag 0: test set
        save_path = test_save_path
    # create the class folder on first use
    if not os.path.isdir(save_path + file_name):
        os.makedirs(save_path + file_name)
    shutil.copy(config.path + 'images/' + image_path,
                save_path + file_name + '/' + image_path.split('/')[1])
    print('%s done!' % image_path.split('/')[1])

time_end = time.time()
print('CUB200 training/test split finished, elapsed %ss!!' % (time_end - time_start))
```

config file:

```python
# *_*coding: utf-8 *_*
# author --liming--

path = '/media/lm/C3F680DFF08EB695/细粒度数据集/birds/CUB200/CUB_200_2011/'

# directories produced by the split script above
ROOT_TRAIN = path + 'dataset/train/'
ROOT_TEST = path + 'dataset/test/'
BATCH_SIZE = 16
```

(2) Reading the data with PyTorch

```python
# *_*coding: utf-8 *_*
# author --liming--

"""
Convert the already-downloaded dataset so that pytorch can read it easily.
"""

import torch
import torchvision
import config
from torchvision import datasets, transforms

data_transform = transforms.Compose([
    transforms.Resize((224, 224)),  # resize both edges so samples batch cleanly
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

def train_data_load():
    # training set
    root_train = config.ROOT_TRAIN
    train_dataset = torchvision.datasets.ImageFolder(root_train,
                                                     transform=data_transform)
    CLASS = train_dataset.class_to_idx
    print('training label to folder mapping:', CLASS)
    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=config.BATCH_SIZE,
                                               shuffle=True)
    return CLASS, train_loader

def test_data_load():
    # test set
    root_test = config.ROOT_TEST
    test_dataset = torchvision.datasets.ImageFolder(root_test,
                                                    transform=data_transform)
    CLASS = test_dataset.class_to_idx
    print('test label to folder mapping:', CLASS)
    test_loader = torch.utils.data.DataLoader(test_dataset,
                                              batch_size=config.BATCH_SIZE,
                                              shuffle=True)
    return CLASS, test_loader

if __name__ == '__main__':
    train_data_load()
    test_data_load()
```
[ "# *_*coding: utf-8 *_*\n # author --liming--\n \n\"\"\"\n读取images.txt文件,获得每个图像的标签\n读取train_test_split.txt文件,获取每个图像的train, test标签.其中1为训练,0为测试.\n\"\"\"\n \nimport os\nimport shutil\nimport numpy as np\nimport config\nimport time\n \ntime_start = URL()", "# 文件路径\npath_images = URL + 'URL'\npath_split = URL + 'train_test_split.txt'\ntrian_save_path = URL + 'dataset/train/'\ntest_save_path = URL + 'dataset/test/'", "# 读取images.txt文件\nimages = []\nwith open(path_images,'r') as f:\n for line in f:\n URL(list(URL('\\n').split(',')))", "# 读取train_test_split.txt文件\nsplit = []\nwith open(path_split, 'r') as f_:\n for line in f_:\n URL(list(URL('\\n').split(',')))", "# 划分\nnum = len(images) # 图像的总个数\nfor k in range(num):\n file_name = images[k][0].split(' ')[1].split('/')[0]\n aaa = int(split[k][0][-1])\n if int(split[k][0][-1]) == 1: # 划分到训练集\n #判断文件夹是否存在\n if URL(trian_save_path + file_name):\n URL(URL + 'images/' + images[k][0].split(' ')[1], trian_save_path+file_name+'/'+images[k][0].split(' ')[1].split('/')[1])\n else:\n os.makedirs(trian_save_path + file_name)\n URL(URL + 'images/' + images[k][0].split(' ')[1], trian_save_path + file_name + '/' + images[k][0].split(' ')[1].split('/')[1])\n print('%s处理完毕!' % images[k][0].split(' ')[1].split('/')[1])\n else:\n #判断文件夹是否存在\n if URL(test_save_path + file_name):\n aaaa = URL + 'images/' + images[k][0].split(' ')[1]\n bbbb = test_save_path+file_name+'/'+images[k][0].split(' ')[1]\n URL(URL + 'images/' + images[k][0].split(' ')[1], test_save_path+file_name+'/'+images[k][0].split(' ')[1].split('/')[1])\n else:\n os.makedirs(test_save_path + file_name)\n URL(URL + 'images/' + images[k][0].split(' ')[1], test_save_path + file_name + '/' + images[k][0].split(' ')[1].split('/')[1])\n print('%s处理完毕!' % images[k][0].split(' ')[1].split('/')[1])\n \ntime_end = URL()\nprint('CUB200训练集和测试集划分完毕, 耗时%s!!' % (time_end - time_start))\nconfig文件", "# *_*coding: utf-8 *_*", "# author --liming--\n \npath = '/media/lm/C3F680DFF08EB695/细粒度数据集/birds/CUB200/CUB_200_2011/'\n \nROOT_TRAIN = path + 'images/train/'\nROOT_TEST = path + 'images/test/'\nBATCH_SIZE = 16\n(2) 利用Pytorch方式读取数据", "# *_*coding: utf-8 *_*", "# author --liming--\n \n\"\"\"\n用于已下载数据集的转换,便于pytorch的读取\n\"\"\"\n \nimport torch\nimport torchvision\nimport config\nfrom torchvision import datasets, transforms\n \ndata_transform = transforms.Compose([\n transforms.Resize(224),\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n])\n \ndef train_data_load():\n # 训练集\n root_train = config.ROOT_TRAIN\n train_dataset = torchvision.datasets.ImageFolder(root_train,\n transform=data_transform)\n CLASS = train_dataset.class_to_idx\n print('训练数据label与文件名的关系:', CLASS)\n train_loader = URL.DataLoader(train_dataset,\n batch_size=config.BATCH_SIZE,\n shuffle=True)\n return CLASS, train_loader\n \ndef test_data_load():\n # 测试集\n root_test = config.ROOT_TEST\n test_dataset = torchvision.datasets.ImageFolder(root_test,\n transform=data_transform)\n \n CLASS = test_dataset.class_to_idx\n print('测试数据label与文件名的关系:',CLASS)\n test_loader = URL.DataLoader(test_dataset,\n batch_size=config.BATCH_SIZE,\n shuffle=True)\n return CLASS, test_loader\n \nif __name__ == '__main___':\n train_data_load()\n test_data_load()" ]
[ "TAGS\n#region-us \n", "# *_*coding: utf-8 *_*\n # author --liming--\n \n\"\"\"\n读取images.txt文件,获得每个图像的标签\n读取train_test_split.txt文件,获取每个图像的train, test标签.其中1为训练,0为测试.\n\"\"\"\n \nimport os\nimport shutil\nimport numpy as np\nimport config\nimport time\n \ntime_start = URL()", "# 文件路径\npath_images = URL + 'URL'\npath_split = URL + 'train_test_split.txt'\ntrian_save_path = URL + 'dataset/train/'\ntest_save_path = URL + 'dataset/test/'", "# 读取images.txt文件\nimages = []\nwith open(path_images,'r') as f:\n for line in f:\n URL(list(URL('\\n').split(',')))", "# 读取train_test_split.txt文件\nsplit = []\nwith open(path_split, 'r') as f_:\n for line in f_:\n URL(list(URL('\\n').split(',')))", "# 划分\nnum = len(images) # 图像的总个数\nfor k in range(num):\n file_name = images[k][0].split(' ')[1].split('/')[0]\n aaa = int(split[k][0][-1])\n if int(split[k][0][-1]) == 1: # 划分到训练集\n #判断文件夹是否存在\n if URL(trian_save_path + file_name):\n URL(URL + 'images/' + images[k][0].split(' ')[1], trian_save_path+file_name+'/'+images[k][0].split(' ')[1].split('/')[1])\n else:\n os.makedirs(trian_save_path + file_name)\n URL(URL + 'images/' + images[k][0].split(' ')[1], trian_save_path + file_name + '/' + images[k][0].split(' ')[1].split('/')[1])\n print('%s处理完毕!' % images[k][0].split(' ')[1].split('/')[1])\n else:\n #判断文件夹是否存在\n if URL(test_save_path + file_name):\n aaaa = URL + 'images/' + images[k][0].split(' ')[1]\n bbbb = test_save_path+file_name+'/'+images[k][0].split(' ')[1]\n URL(URL + 'images/' + images[k][0].split(' ')[1], test_save_path+file_name+'/'+images[k][0].split(' ')[1].split('/')[1])\n else:\n os.makedirs(test_save_path + file_name)\n URL(URL + 'images/' + images[k][0].split(' ')[1], test_save_path + file_name + '/' + images[k][0].split(' ')[1].split('/')[1])\n print('%s处理完毕!' % images[k][0].split(' ')[1].split('/')[1])\n \ntime_end = URL()\nprint('CUB200训练集和测试集划分完毕, 耗时%s!!' % (time_end - time_start))\nconfig文件", "# *_*coding: utf-8 *_*", "# author --liming--\n \npath = '/media/lm/C3F680DFF08EB695/细粒度数据集/birds/CUB200/CUB_200_2011/'\n \nROOT_TRAIN = path + 'images/train/'\nROOT_TEST = path + 'images/test/'\nBATCH_SIZE = 16\n(2) 利用Pytorch方式读取数据", "# *_*coding: utf-8 *_*", "# author --liming--\n \n\"\"\"\n用于已下载数据集的转换,便于pytorch的读取\n\"\"\"\n \nimport torch\nimport torchvision\nimport config\nfrom torchvision import datasets, transforms\n \ndata_transform = transforms.Compose([\n transforms.Resize(224),\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n])\n \ndef train_data_load():\n # 训练集\n root_train = config.ROOT_TRAIN\n train_dataset = torchvision.datasets.ImageFolder(root_train,\n transform=data_transform)\n CLASS = train_dataset.class_to_idx\n print('训练数据label与文件名的关系:', CLASS)\n train_loader = URL.DataLoader(train_dataset,\n batch_size=config.BATCH_SIZE,\n shuffle=True)\n return CLASS, train_loader\n \ndef test_data_load():\n # 测试集\n root_test = config.ROOT_TEST\n test_dataset = torchvision.datasets.ImageFolder(root_test,\n transform=data_transform)\n \n CLASS = test_dataset.class_to_idx\n print('测试数据label与文件名的关系:',CLASS)\n test_loader = URL.DataLoader(test_dataset,\n batch_size=config.BATCH_SIZE,\n shuffle=True)\n return CLASS, test_loader\n \nif __name__ == '__main___':\n train_data_load()\n test_data_load()" ]
a125caa5d8ea8d27952f7fbe1b7f187ddf6a39bb
# Citation

If you use the dataset, please cite the paper:

```
@article{10.1007/s10579-021-09568-y,
  year = {2022},
  title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}},
  author = {Baykara, Batuhan and Güngör, Tunga},
  journal = {Language Resources and Evaluation},
  issn = {1574-020X},
  doi = {10.1007/s10579-021-09568-y},
  pages = {1--35}
}
```
batubayk/TR-News
[ "task_categories:summarization", "task_categories:text-classification", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:100K<n<1M", "language:tr", "region:us" ]
2022-04-18T16:23:02+00:00
{"language": ["tr"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization", "text-classification", "text-generation", "text2text-generation"], "pretty_name": "TR-News"}
2023-03-04T22:39:35+00:00
[]
[ "tr" ]
TAGS #task_categories-summarization #task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Turkish #region-us
If you use the dataset, please cite the paper: @article{10.1007/s10579-021-09568-y, year = {2022}, title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}}, author = {Baykara, Batuhan and Güngör, Tunga}, journal = {Language Resources and Evaluation}, issn = {1574-020X}, doi = {10.1007/s10579-021-09568-y}, pages = {1--35}}
[]
[ "TAGS\n#task_categories-summarization #task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Turkish #region-us \n" ]
08d5a00cd764b7116bbc8422818fdfa57c7b413b
# Citation

If you use the dataset, please cite the paper:

```
@article{10.1007/s10579-021-09568-y,
  year = {2022},
  title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}},
  author = {Baykara, Batuhan and Güngör, Tunga},
  journal = {Language Resources and Evaluation},
  issn = {1574-020X},
  doi = {10.1007/s10579-021-09568-y},
  pages = {1--35}
}
```
batubayk/HU-News
[ "task_categories:summarization", "task_categories:text-classification", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:100K<n<1M", "language:hu", "region:us" ]
2022-04-18T16:23:27+00:00
{"language": ["hu"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization", "text-classification", "text-generation", "text2text-generation"], "pretty_name": "HU-News"}
2023-03-04T22:40:26+00:00
[]
[ "hu" ]
TAGS #task_categories-summarization #task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Hungarian #region-us
If you use the dataset, please cite the paper: @article{10.1007/s10579-021-09568-y, year = {2022}, title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}}, author = {Baykara, Batuhan and Güngör, Tunga}, journal = {Language Resources and Evaluation}, issn = {1574-020X}, doi = {10.1007/s10579-021-09568-y}, pages = {1--35}}
[]
[ "TAGS\n#task_categories-summarization #task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Hungarian #region-us \n" ]
13e24921a5d4e04d6ab4bf21fada26c530e3db1f
# Tweet Emotion Intensity Dataset

## Papers

* Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the Sixth Joint Conference on Lexical and Computational Semantics (*Sem), August 2017, Vancouver, Canada.
* WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017, Copenhagen, Denmark.
stepp1/tweet_emotion_intensity
[ "region:us" ]
2022-04-18T16:32:33+00:00
{}
2022-04-18T19:49:56+00:00
[]
[]
TAGS #region-us
# Tweet Emotion Intensity Dataset ## Papers: * Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the sixth joint conference on lexical and computational semantics (*Sem), August 2017, Vancouver, Canada. * WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017, Copenhagen, Denmark.
[ "# Tweet Emotion Intensity Dataset", "## Papers: \n\n* Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the sixth joint conference on lexical and computational semantics (*Sem), August 2017, Vancouver, Canada.\n\n* WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017, Copenhagen, Denmark." ]
[ "TAGS\n#region-us \n", "# Tweet Emotion Intensity Dataset", "## Papers: \n\n* Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the sixth joint conference on lexical and computational semantics (*Sem), August 2017, Vancouver, Canada.\n\n* WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017, Copenhagen, Denmark." ]
e078606bba4ff4735ffac758f8fb5e9d9045e4ba
# Dataset Card for the-reddit-irl-dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)

## Dataset Description

- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-irl-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditirldataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditirldataset)

### Dataset Summary

Data from the humour subreddits /r/meirl and /r/me_irl, up to Apr 1 2022.

### Languages

Mainly English.

## Dataset Structure

### Data Instances

A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.

### Data Fields

- `type`: the type of the data point. Can be `post` or `comment`.
- `id`: the base-36 Reddit ID of the data point. Unique when combined with type.
- `subreddit.id`: the base-36 Reddit ID of the data point's host subreddit. Unique.
- `subreddit.name`: the human-readable name of the data point's host subreddit.
- `subreddit.nsfw`: a boolean marking the data point's host subreddit as NSFW or not.
- `created_utc`: a UTC timestamp for the data point.
- `permalink`: a reference link to the data point on Reddit.
- `score`: score of the data point on Reddit.
- `domain`: (Post only) the domain of the data point's link.
- `url`: (Post only) the destination of the data point's link, if any.
- `selftext`: (Post only) the self-text of the data point, if any.
- `title`: (Post only) the title of the post data point.
- `body`: (Comment only) the body of the comment data point.
- `sentiment`: (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.

## Additional Information

### Licensing Information

CC-BY v4.0
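A minimal consumption sketch for the fields above, assuming the dataset loads through the standard `datasets` API and exposes separate `posts` and `comments` configurations (the config names are an assumption based on the card's note that posts and comments live in separate files):

```python
from datasets import load_dataset

# "comments" config name is assumed, not confirmed by the card; adjust if it differs.
comments = load_dataset("SocialGrep/the-reddit-irl-dataset", "comments", split="train")

# Keep only high-scoring comments and inspect their sentiment values.
popular = comments.filter(lambda row: row["score"] is not None and row["score"] > 100)
for row in popular.select(range(min(3, len(popular)))):
    print(row["permalink"], row["score"], row["sentiment"])
```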
SocialGrep/the-reddit-irl-dataset
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-18T18:18:54+00:00
{"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"]}
2022-07-01T16:52:22+00:00
[]
[ "en" ]
TAGS #annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for the-reddit-irl-dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Licensing Information ## Dataset Description - Homepage: URL - Point of Contact: Website ### Dataset Summary Data from the humour subreddits /r/meirl and /r/me_irl, up to Apr 1 2022. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'URL': the human-readable name of the data point's host subreddit. - 'URL': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
[ "# Dataset Card for the-reddit-irl-dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information", "## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website", "### Dataset Summary\n\nData from the humour subreddits /r/meirl and /r/me_irl, up to Apr 1 2022.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.", "### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.", "## Additional Information", "### Licensing Information\n\nCC-BY v4.0" ]
[ "TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for the-reddit-irl-dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information", "## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website", "### Dataset Summary\n\nData from the humour subreddits /r/meirl and /r/me_irl, up to Apr 1 2022.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.", "### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.", "## Additional Information", "### Licensing Information\n\nCC-BY v4.0" ]
4544b33bcc8077384d02b422f91b5723b890f53c
# Dataset Card for "squad"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB

### Dataset Summary

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

We show detailed information for up to 5 configurations of the dataset.

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB

An example of 'train' looks as follows.

```
{
    "answers": {
        "answer_start": [1],
        "text": ["This is a test text"]
    },
    "context": "This is a test context.",
    "id": 1,
    "question": "Is this a test?",
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text

- `id`: an `int32` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.

### Data Splits

| name       | train | validation |
|------------|------:|-----------:|
| plain_text |   --- |        --- |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
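Given the field layout above, recovering the raw answer span from a record looks roughly like this; a minimal sketch assuming the dataset loads with the standard `datasets` API (in SQuAD-style data the answer text is a verbatim slice of the context starting at `answer_start`):

```python
from datasets import load_dataset

ds = load_dataset("Lexi/spanextract", split="train")

ex = ds[0]
start = ex["answers"]["answer_start"][0]
answer = ex["answers"]["text"][0]

# The answer should be a literal span of the context, so slicing the context
# at answer_start should reproduce it.
span = ex["context"][start : start + len(answer)]
print(ex["question"])
print("gold answer:", answer, "| recovered span:", span)
```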
Lexi/spanextract
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|wikipedia", "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-18T19:06:21+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "paperswithcode_id": "squad", "pretty_name": "SQuAD"}
2022-10-25T09:08:42+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #region-us
Dataset Card for "squad" ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 33.51 MB * Size of the generated dataset: 85.75 MB * Total amount of disk used: 119.27 MB ### Dataset Summary Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- We show detailed information for up to 5 configurations of the dataset. ### Data Instances #### plain\_text * Size of downloaded dataset files: 33.51 MB * Size of the generated dataset: 85.75 MB * Total amount of disk used: 119.27 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'id': a 'int32' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ''' ### Contributions Thanks to @lewtun, @albertvillanova, @patrickvonplaten, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 33.51 MB\n* Size of the generated dataset: 85.75 MB\n* Total amount of disk used: 119.27 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'id': a 'int32' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\n'''", "### Contributions\n\n\nThanks to @lewtun, @albertvillanova, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 33.51 MB\n* Size of the generated dataset: 85.75 MB\n* Total amount of disk used: 119.27 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'id': a 'int32' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\n'''", "### Contributions\n\n\nThanks to @lewtun, @albertvillanova, @patrickvonplaten, @thomwolf for adding this dataset." ]
f549503ef55b76632d02538edb6987ea9f96f82c
Paper: [Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision](https://arxiv.org/abs/2204.03685)

Authors: Wanyu Du*, Zae Myung Kim*, Vipul Raheja, Dhruv Kumar, Dongyeop Kang

Github repo: https://github.com/vipulraheja/IteraTeR

Watch our system demonstration below!

[![demo](https://yt-embed.herokuapp.com/embed?v=lK08tIpEoaE)](https://www.youtube.com/watch?v=lK08tIpEoaE)
wanyu/IteraTeR_v2
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:apache-2.0", "conditional-text-generation", "text-editing", "arxiv:2204.03685", "region:us" ]
2022-04-18T19:09:17+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_v2", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]}
2022-10-24T17:58:08+00:00
[ "2204.03685" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2204.03685 #region-us
Paper: Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision Authors: Wanyu Du*, Zae Myung Kim*, Vipul Raheja, Dhruv Kumar, Dongyeop Kang Github repo: URL Watch our system demonstration below! ![demo](URL
[]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2204.03685 #region-us \n" ]
77396f1031920d1c116ec6d63ac397ff6aa492d3
## Dataset Description

- **Homepage:** [Cat and Dog](https://www.kaggle.com/datasets/tongpython/cat-and-dog)
- **Download Size** 217.30 MiB
- **Generated Size** 198.89 MiB
- **Total Size** 416.20 MiB

### Dataset Summary

A dataset from [kaggle](https://www.kaggle.com/datasets/tongpython/cat-and-dog) with duplicate data removed.

### Data Fields

The data instances have the following fields:

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.

### Class Label Mappings:

```
{
    "cat": 0,
    "dog": 1,
}
```

### Data Splits

|               | train | test |
|---------------|-------|-----:|
| # of examples | 8000  | 2000 |

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Cat_and_Dog")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['image', 'labels'],
        num_rows: 8000
    })
    test: Dataset({
        features: ['image', 'labels'],
        num_rows: 2000
    })
})
>>> dataset["train"].features
{'image': Image(decode=True, id=None),
 'labels': ClassLabel(num_classes=2, names=['cat', 'dog'], id=None)}
```
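Since images decode lazily, per-batch preprocessing can be attached with `set_transform` so that nothing is decoded until a sample is actually accessed. A sketch assuming `torchvision` is installed; the resize-to-224 transform is illustrative, not part of the card:

```python
from datasets import load_dataset
from torchvision import transforms

dataset = load_dataset("Bingsu/Cat_and_Dog", split="train")

to_tensor = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def preprocess(batch):
    # Called per accessed batch; images are only decoded at this point.
    batch["pixel_values"] = [to_tensor(img.convert("RGB")) for img in batch["image"]]
    return batch

dataset.set_transform(preprocess)
print(dataset[0]["pixel_values"].shape)  # torch.Size([3, 224, 224])
```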
Bingsu/Cat_and_Dog
[ "task_categories:image-classification", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc0-1.0", "region:us" ]
2022-04-19T01:23:06+00:00
{"language": ["en"], "license": ["cc0-1.0"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "Cat and Dog", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog"}}}}], "splits": [{"name": "train", "num_bytes": 166451650.0, "num_examples": 8000}, {"name": "test", "num_bytes": 42101650.0, "num_examples": 2000}], "download_size": 227859268, "dataset_size": 208553300.0, "size_in_bytes": 436412568.0}}
2023-01-26T10:48:25+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc0-1.0 #region-us
Dataset Description ------------------- * Homepage: Cat and Dog * Download Size 217.30 MiB * Generated Size 198.89 MiB * Total Size 416.20 MiB ### Dataset Summary A dataset from kaggle with duplicate data removed. ### Data Fields The data instances have the following fields: * 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. * 'labels': an 'int' classification label. ### Class Label Mappings: ### Data Splits
[ "### Dataset Summary\n\n\nA dataset from kaggle with duplicate data removed.", "### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.", "### Class Label Mappings:", "### Data Splits" ]
[ "TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc0-1.0 #region-us \n", "### Dataset Summary\n\n\nA dataset from kaggle with duplicate data removed.", "### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.", "### Class Label Mappings:", "### Data Splits" ]
48fdfd7ab1dbc1a62e4e8a8b9f4c360259d51d3c
## Dataset Description

- **Homepage:** [Korean Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset)
- **Repository:** [Kyubyong/kss](https://github.com/Kyubyong/kss)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A

# Description from the original author

### KSS Dataset: Korean Single speaker Speech Dataset

KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To the best of my knowledge, this is the first publicly available speech dataset for Korean.

### File Format

Each line in `transcript.v.1.3.txt` is delimited by `|` into six fields.

- A. Audio file path
- B. Original script
- C. Expanded script
- D. Decomposed script
- E. Audio duration (seconds)
- F. English translation

e.g.,

1/1_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.

### Specification

- Audio File Type: wav
- Total Running Time: 12+ hours
- Sample Rate: 44,100 Hz
- Number of Audio Files: 12,853
- Sources
  1. [Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.](https://www.amazon.com/500-Basic-Korean-Verbs-Comprehensive/dp/0804846057/ref=sr_1_1?s=books&ie=UTF8&qid=1522911616&sr=1-1&keywords=kyubyong+park)
  2. [Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.](http://www.hanbooks.com/500bakoad.html)
  3. [Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.](https://www.amazon.com/Essential-Korean-Vocabulary-Phrases-Fluently/dp/0804843252/ref=sr_1_3?s=books&ie=UTF8&qid=1522911806&sr=1-3&keywords=kyubyong+park)
  4. [Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.](https://www.amazon.com/Tuttle-Learners-Korean-English-Dictionary-Essential/dp/0804841500/ref=sr_1_8?s=books&ie=UTF8&qid=1522911806&sr=1-8&keywords=kyubyong+park)

### License

CC BY-NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this.

### Citation

If you want to cite KSS Dataset, please refer to this:

Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018

### Reference

Check out [this](https://github.com/Kyubyong/kss) for a project using this KSS Dataset.

### Contact

You can contact me at [email protected].

April, 2018.

Kyubyong Park

### Dataset Summary

12,853 Korean audio files with transcription.

### Supported Tasks and Leaderboards

text-to-speech

### Languages

Korean

## Dataset Structure

### Data Instances

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KSS_Dataset")
>>> dataset["train"].features
{'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None),
 'original_script': Value(dtype='string', id=None),
 'expanded_script': Value(dtype='string', id=None),
 'decomposed_script': Value(dtype='string', id=None),
 'duration': Value(dtype='float32', id=None),
 'english_translation': Value(dtype='string', id=None)}
```

```python
>>> dataset["train"][0]
{'audio': {'path': None,
  'array': array([ 0.00000000e+00,  3.05175781e-05, -4.57763672e-05, ...,
          0.00000000e+00, -3.05175781e-05, -3.05175781e-05]),
  'sampling_rate': 44100},
 'original_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
 'expanded_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
 'decomposed_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
 'duration': 3.5,
 'english_translation': 'He seemed to be pretending to be okay.'}
```

### Data Splits

|               | train |
|---------------|------:|
| # of examples | 12853 |
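For working with the raw Kaggle release rather than the hosted splits, a minimal parser for the `|`-delimited transcript format described above (the field names are informal labels chosen here, not an official schema):

```python
from pathlib import Path

FIELDS = [
    "path", "original_script", "expanded_script",
    "decomposed_script", "duration", "english_translation",
]

def parse_transcript(path: str):
    """Yield one dict per line of the KSS transcript file."""
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        values = line.split("|")
        record = dict(zip(FIELDS, values))
        record["duration"] = float(record["duration"])  # seconds
        yield record

# Example usage:
# rows = list(parse_transcript("transcript.v.1.3.txt"))
```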
Bingsu/KSS_Dataset
[ "task_categories:text-to-speech", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ko", "license:cc-by-nc-sa-4.0", "region:us" ]
2022-04-19T05:59:21+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ko"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-to-speech"], "task_ids": [], "pretty_name": "Korean Single Speaker Speech Dataset"}
2022-07-01T23:10:10+00:00
[]
[ "ko" ]
TAGS #task_categories-text-to-speech #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Korean #license-cc-by-nc-sa-4.0 #region-us
Dataset Description ------------------- * Homepage: Korean Single Speaker Speech Dataset * Repository: Kyubyong/kss * Paper: N/A * Leaderboard: N/A * Point of Contact: N/A Description of the original author ================================== ### KSS Dataset: Korean Single speaker Speech Dataset KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actoress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To my best knowledge, this is the first publicly available speech dataset for Korean. ### File Format Each line in 'transcript.v.1.3.txt' is delimited by '|' into six fields. * A. Audio file path * B. Original script * C. Expanded script * D. Decomposed script * E. Audio duration (seconds) * F. English translation e.g., 1/1\_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes. ### Specification * Audio File Type: wav * Total Running Time: 12+ hours * Sample Rate: 44,100 KHZ * Number of Audio Files: 12,853 * Sources + |1| Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.| + |2| Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.| + |3| Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.| + |4| Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.| ### License NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this. If you want to cite KSS Dataset, please refer to this: Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, URL 2018 ### Reference Check out this for a project using this KSS Dataset. ### Contact You can contact me at kbpark.linguist@URL. April, 2018. Kyubyong Park ### Dataset Summary 12,853 Korean audio files with transcription. ### Supported Tasks and Leaderboards text-to-speech ### Languages korean Dataset Structure ----------------- ### Data Instances ### Data Splits
[ "### KSS Dataset: Korean Single speaker Speech Dataset\n\n\nKSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actoress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To my best knowledge, this is the first publicly available speech dataset for Korean.", "### File Format\n\n\nEach line in 'transcript.v.1.3.txt' is delimited by '|' into six fields.\n\n\n* A. Audio file path\n* B. Original script\n* C. Expanded script\n* D. Decomposed script\n* E. Audio duration (seconds)\n* F. English translation\n\n\ne.g.,\n\n\n1/1\\_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.", "### Specification\n\n\n* Audio File Type: wav\n* Total Running Time: 12+ hours\n* Sample Rate: 44,100 KHZ\n* Number of Audio Files: 12,853\n* Sources\n\t+ |1| Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.|\n\t+ |2| Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.|\n\t+ |3| Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.|\n\t+ |4| Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.|", "### License\n\n\nNC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this.\n\n\nIf you want to cite KSS Dataset, please refer to this:\n\n\nKyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, URL 2018", "### Reference\n\n\nCheck out this for a project using this KSS Dataset.", "### Contact\n\n\nYou can contact me at kbpark.linguist@URL.\n\n\nApril, 2018.\n\n\nKyubyong Park", "### Dataset Summary\n\n\n12,853 Korean audio files with transcription.", "### Supported Tasks and Leaderboards\n\n\ntext-to-speech", "### Languages\n\n\nkorean\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Splits" ]
[ "TAGS\n#task_categories-text-to-speech #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Korean #license-cc-by-nc-sa-4.0 #region-us \n", "### KSS Dataset: Korean Single speaker Speech Dataset\n\n\nKSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actoress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To my best knowledge, this is the first publicly available speech dataset for Korean.", "### File Format\n\n\nEach line in 'transcript.v.1.3.txt' is delimited by '|' into six fields.\n\n\n* A. Audio file path\n* B. Original script\n* C. Expanded script\n* D. Decomposed script\n* E. Audio duration (seconds)\n* F. English translation\n\n\ne.g.,\n\n\n1/1\\_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.", "### Specification\n\n\n* Audio File Type: wav\n* Total Running Time: 12+ hours\n* Sample Rate: 44,100 KHZ\n* Number of Audio Files: 12,853\n* Sources\n\t+ |1| Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.|\n\t+ |2| Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.|\n\t+ |3| Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.|\n\t+ |4| Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.|", "### License\n\n\nNC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this.\n\n\nIf you want to cite KSS Dataset, please refer to this:\n\n\nKyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, URL 2018", "### Reference\n\n\nCheck out this for a project using this KSS Dataset.", "### Contact\n\n\nYou can contact me at kbpark.linguist@URL.\n\n\nApril, 2018.\n\n\nKyubyong Park", "### Dataset Summary\n\n\n12,853 Korean audio files with transcription.", "### Supported Tasks and Leaderboards\n\n\ntext-to-speech", "### Languages\n\n\nkorean\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Splits" ]
199e4ae37915137c555b1765c01477c216287d34
# FLEURS

## Dataset Description

- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amount of disk used:** ca. 350 GB

Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193). We use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages.

Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is used and "unit error rate" (characters, signs) of all languages is averaged.

Languages and results are also grouped into seven geographical areas:

- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*

## How to use & Supported Tasks

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi_in" for Hindi):

```python
from datasets import load_dataset

fleurs = load_dataset("google/fleurs", "hi_in", split="train")
```

Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)

print(next(iter(fleurs)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

Local:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

fleurs = load_dataset("google/fleurs", "hi_in", split="train")
batch_sampler = BatchSampler(RandomSampler(fleurs), batch_size=32, drop_last=False)
dataloader = DataLoader(fleurs, batch_sampler=batch_sampler)
```

Streaming:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
dataloader = DataLoader(fleurs, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

Fine-tune your own Language Identification models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)

### 1. Speech Recognition (ASR)

```py
from datasets import load_dataset

fleurs_asr = load_dataset("google/fleurs", "af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/fleurs", "all")

# see structure
print(fleurs_asr)

# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"]  # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```

### 2. Language Identification

LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all.

```py
from datasets import load_dataset

fleurs_langID = load_dataset("google/fleurs", "all")  # to download all data

# see structure
print(fleurs_langID)

# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"]  # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"]  # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```

### 3. Retrieval

Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.

```py
from datasets import load_dataset

fleurs_retrieval = load_dataset("google/fleurs", "af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/fleurs", "all")

# see structure
print(fleurs_retrieval)

# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"]  # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"]  # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"]  # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```

Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.

## Dataset Structure

We show detailed information for the example configuration `af_za` of the dataset. All other configurations have the same structure.

### Data Instances

**af_za**

- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB

An example of a data instance of the config `af_za` looks as follows:

```
{'id': 91,
 'num_samples': 385920,
 'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
 'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
  'array': array([ 0.0000000e+00,  0.0000000e+00,  0.0000000e+00, ...,
         -1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
  'sampling_rate': 16000},
 'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
 'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
 'gender': 0,
 'lang_id': 0,
 'language': 'Afrikaans',
 'lang_group_id': 3}
```

### Data Fields

The data fields are the same among all splits.

- **id** (int): ID of audio sample
- **num_samples** (int): Number of float values
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio
- **raw_transcription** (str): The non-normalized transcription of the audio file
- **transcription** (str): Transcription of the audio file
- **gender** (int): Class id of gender
- **lang_id** (int): Class id of language
- **lang_group_id** (int): Class id of language group

### Data Splits

Every config only has the `"train"` split containing *ca.* 1000 examples, and a `"validation"` and `"test"` split each containing *ca.* 400 examples.

## Dataset Creation

We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for train, dev and test respectively.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goals is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).

### Discussion of Biases

Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.

### Other Known Limitations

The dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.

## Additional Information

All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).

### Citation Information

You can access the FLEURS paper at https://arxiv.org/abs/2205.12446. Please cite the paper when referencing the FLEURS corpus as:

```
@article{fleurs2022arxiv,
  title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
  author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
  journal = {arXiv preprint arXiv:2205.12446},
  url = {https://arxiv.org/abs/2205.12446},
  year = {2022},
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.
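One practical detail the field list above implies: samples ship at 16 kHz. A short sketch of on-the-fly resampling with the standard `datasets` `Audio` feature, in case a downstream model expects a different rate (the 8 kHz target here is arbitrary and purely illustrative):

```python
from datasets import load_dataset, Audio

fleurs = load_dataset("google/fleurs", "af_za", split="validation")

# FLEURS audio is stored at 16 kHz; casting the column makes `datasets`
# resample lazily whenever a sample is accessed.
fleurs = fleurs.cast_column("audio", Audio(sampling_rate=8_000))
print(fleurs[0]["audio"]["sampling_rate"])  # 8000
```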
google/fleurs
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "language:afr", "language:amh", "language:ara", "language:asm", "language:ast", "language:azj", "language:bel", "language:ben", "language:bos", "language:cat", "language:ceb", "language:cmn", "language:ces", "language:cym", "language:dan", "language:deu", "language:ell", "language:eng", "language:spa", "language:est", "language:fas", "language:ful", "language:fin", "language:tgl", "language:fra", "language:gle", "language:glg", "language:guj", "language:hau", "language:heb", "language:hin", "language:hrv", "language:hun", "language:hye", "language:ind", "language:ibo", "language:isl", "language:ita", "language:jpn", "language:jav", "language:kat", "language:kam", "language:kea", "language:kaz", "language:khm", "language:kan", "language:kor", "language:ckb", "language:kir", "language:ltz", "language:lug", "language:lin", "language:lao", "language:lit", "language:luo", "language:lav", "language:mri", "language:mkd", "language:mal", "language:mon", "language:mar", "language:msa", "language:mlt", "language:mya", "language:nob", "language:npi", "language:nld", "language:nso", "language:nya", "language:oci", "language:orm", "language:ory", "language:pan", "language:pol", "language:pus", "language:por", "language:ron", "language:rus", "language:bul", "language:snd", "language:slk", "language:slv", "language:sna", "language:som", "language:srp", "language:swe", "language:swh", "language:tam", "language:tel", "language:tgk", "language:tha", "language:tur", "language:ukr", "language:umb", "language:urd", "language:uzb", "language:vie", "language:wol", "language:xho", "language:yor", "language:yue", "language:zul", "license:cc-by-4.0", "speech-recognition", "arxiv:2205.12446", "arxiv:2106.03193", "region:us" ]
2022-04-19T09:25:58+00:00
{"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["afr", "amh", "ara", "asm", "ast", "azj", "bel", "ben", "bos", "cat", "ceb", "cmn", "ces", "cym", "dan", "deu", "ell", "eng", "spa", "est", "fas", "ful", "fin", "tgl", "fra", "gle", "glg", "guj", "hau", "heb", "hin", "hrv", "hun", "hye", "ind", "ibo", "isl", "ita", "jpn", "jav", "kat", "kam", "kea", "kaz", "khm", "kan", "kor", "ckb", "kir", "ltz", "lug", "lin", "lao", "lit", "luo", "lav", "mri", "mkd", "mal", "mon", "mar", "msa", "mlt", "mya", "nob", "npi", "nld", "nso", "nya", "oci", "orm", "ory", "pan", "pol", "pus", "por", "ron", "rus", "bul", "snd", "slk", "slv", "sna", "som", "srp", "swe", "swh", "tam", "tel", "tgk", "tha", "tur", "ukr", "umb", "urd", "uzb", "vie", "wol", "xho", "yor", "yue", "zul"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.", "tags": ["speech-recognition"]}
2023-02-07T20:51:01+00:00
[ "2205.12446", "2106.03193" ]
[ "afr", "amh", "ara", "asm", "ast", "azj", "bel", "ben", "bos", "cat", "ceb", "cmn", "ces", "cym", "dan", "deu", "ell", "eng", "spa", "est", "fas", "ful", "fin", "tgl", "fra", "gle", "glg", "guj", "hau", "heb", "hin", "hrv", "hun", "hye", "ind", "ibo", "isl", "ita", "jpn", "jav", "kat", "kam", "kea", "kaz", "khm", "kan", "kor", "ckb", "kir", "ltz", "lug", "lin", "lao", "lit", "luo", "lav", "mri", "mkd", "mal", "mon", "mar", "msa", "mlt", "mya", "nob", "npi", "nld", "nso", "nya", "oci", "orm", "ory", "pan", "pol", "pus", "por", "ron", "rus", "bul", "snd", "slk", "slv", "sna", "som", "srp", "swe", "swh", "tam", "tel", "tgk", "tha", "tur", "ukr", "umb", "urd", "uzb", "vie", "wol", "xho", "yor", "yue", "zul" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-North Azerbaijani #language-Belarusian #language-Bengali #language-Bosnian #language-Catalan #language-Cebuano #language-Mandarin Chinese #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Persian #language-Fulah #language-Finnish #language-Tagalog #language-French #language-Irish #language-Galician #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kamba (Kenya) #language-Kabuverdianu #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Central Kurdish #language-Kirghiz #language-Luxembourgish #language-Ganda #language-Lingala #language-Lao #language-Lithuanian #language-Luo (Kenya and Tanzania) #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Norwegian Bokmål #language-Nepali (individual language) #language-Dutch #language-Pedi #language-Nyanja #language-Occitan (post 1500) #language-Oromo #language-Odia #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Bulgarian #language-Sindhi #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Serbian #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkish #language-Ukrainian #language-Umbundu #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Yue Chinese #language-Zulu #license-cc-by-4.0 #speech-recognition #arxiv-2205.12446 #arxiv-2106.03193 #region-us
# FLEURS ## Dataset Description - Fine-Tuning script: pytorch/speech-recognition - Paper: FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech - Total amount of disk used: ca. 350 GB Fleurs is the speech version of the FLoRes machine translation benchmark. We use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages. Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is used and “unit error rate” (characters, signs) of all languages is averaged. Languages and results are also grouped into seven geographical areas: - Western Europe: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh* - Eastern Europe: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian* - Central-Asia/Middle-East/North-Africa: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek* - Sub-Saharan Africa: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu* - South-Asia: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu* - South-East Asia: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese* - CJK languages: *Cantonese and Mandarin Chinese, Japanese, Korean* ## How to use & Supported Tasks ### How to use The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi_in" for Hindi): Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk. *Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed). Local: Streaming: To find out more about loading and preparing audio datasets, head over to URL ### Example scripts Train your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with 'transformers' - here. Fine-tune your own Language Identification models on FLEURS with 'transformers' - here. ### 1. Speech Recognition (ASR) ### 2. Language Identification LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all. ### 3. Retrieval Retrieval provides n-way parallel speech and text data. 
Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult. Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech. ## Dataset Structure We show detailed information for the example configuration 'af_za' of the dataset. All other configurations have the same structure. ### Data Instances af_za - Size of downloaded dataset files: 1.47 GB - Size of the generated dataset: 1 MB - Total amount of disk used: 1.47 GB An example of a data instance of the config 'af_za' looks as follows: ### Data Fields The data fields are the same among all splits. - id (int): ID of audio sample - num_samples (int): Number of float values - path (str): Path to the audio file - audio (dict): Audio object including loaded audio array, sampling rate and path to audio - raw_transcription (str): The non-normalized transcription of the audio file - transcription (str): Transcription of the audio file - gender (int): Class id of gender - lang_id (int): Class id of language - lang_group_id (int): Class id of language group ### Data Splits Every config only has the '"train"' split containing *ca.* 1000 examples, and a '"validation"' and '"test"' split each containing *ca.* 400 examples. ## Dataset Creation We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for train, dev and test respectively. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos). ### Discussion of Biases Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages. ### Other Known Limitations The dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a noisier setting (in production, for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding. ## Additional Information All datasets are licensed under the Creative Commons license (CC-BY). 
You can access the FLEURS paper at URL. Please cite the paper when referencing the FLEURS corpus as: ### Contributions Thanks to @patrickvonplaten and @aconneau for adding this dataset.
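The code snippets referenced in the "How to use" section above (the "Local:" and "Streaming:" examples) were stripped from this card text; the following is a minimal sketch of the described `load_dataset` workflow, reconstructed under the assumption that the `datasets` library and the "hi_in" (Hindi) config named in the card are used:

```python
# Minimal sketch of the usage described above; the exact snippet from the
# original card was stripped, so this is a reconstruction, not the original.
from datasets import load_dataset

# Download and prepare the Hindi config locally.
fleurs = load_dataset("google/fleurs", "hi_in", split="train")

# Or stream samples on the fly instead of downloading the full dataset.
fleurs_stream = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)

sample = next(iter(fleurs_stream))
print(sample["transcription"])           # fields listed under "Data Fields"
print(sample["audio"]["sampling_rate"])  # audio is a dict with array/path/rate
```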
[ "# FLEURS", "## Dataset Description\n\n- Fine-Tuning script: pytorch/speech-recognition\n- Paper: FLEURS: Few-shot Learning Evaluation of\nUniversal Representations of Speech\n- Total amount of disk used: ca. 350 GB\n\nFleurs is the speech version of the FLoRes machine translation benchmark. \nWe use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages. \n\nTraining sets have around 10 hours of supervision. Speakers of the train sets are different than speakers from the dev/test sets. Multilingual fine-tuning is\nused and ”unit error rate” (characters, signs) of all languages is averaged. Languages and results are also grouped into seven geographical areas: \n\n- Western Europe: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh* \n- Eastern Europe: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*\n- Central-Asia/Middle-East/North-Africa: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*\n- Sub-Saharan Africa: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*\n- South-Asia: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*\n- South-East Asia: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*\n- CJK languages: *Cantonese and Mandarin Chinese, Japanese, Korean*", "## How to use & Supported Tasks", "### How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi_in\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).\n\nLocal:\n\n\n\nStreaming:\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL", "### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with 'transformers' - here.\n\nFine-tune your own Language Identification models on FLEURS with 'transformers' - here", "### 1. Speech Recognition (ASR)", "### 2. Language Identification\n\nLangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple, FLEURS-LangID is splitted in train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all.", "### 3. Retrieval\n\nRetrieval provides n-way parallel speech and text data. 
Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoder for speech retrieval. The system has to retrieve the English \"key\" utterance corresponding to the speech translation of \"queries\" in 15 languages. Results have to be reported on the test sets of Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.\n\n\n\nUsers can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.", "## Dataset Structure\n\nWe show detailed information the example configurations 'af_za' of the dataset.\nAll other configurations have the same structure.", "### Data Instances\n\naf_za\n- Size of downloaded dataset files: 1.47 GB\n- Size of the generated dataset: 1 MB\n- Total amount of disk used: 1.47 GB\n\nAn example of a data instance of the config 'af_za' looks as follows:", "### Data Fields\n\nThe data fields are the same among all splits.\n- id (int): ID of audio sample\n- num_samples (int): Number of float values\n- path (str): Path to the audio file\n- audio (dict): Audio object including loaded audio array, sampling rate and path ot audio\n- raw_transcription (str): The non-normalized transcription of the audio file\n- transcription (str): Transcription of the audio file\n- gender (int): Class id of gender\n- lang_id (int): Class id of language\n- lang_group_id (int): Class id of language group", "### Data Splits\n\nEvery config only has the '\"train\"' split containing of *ca.* 1000 examples, and a '\"validation\"' and '\"test\"' split each containing of *ca.* 400 examples.", "## Dataset Creation\n\nWe collect between one and three recordings for each sentence (2.3 on average), and buildnew train-dev-test splits with 1509, 150 and 350 sentences for\ntrain, dev and test respectively.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goal is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).", "### Discussion of Biases\n\nMost datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.", "### Other Known Limitations\n\nThe dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). 
Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.", "## Additional Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).\n\n\n\nYou can access the FLEURS paper at URL\nPlease cite the paper when referencing the FLEURS corpus as:", "### Contributions\n\nThanks to @patrickvonplaten and @aconneau for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-North Azerbaijani #language-Belarusian #language-Bengali #language-Bosnian #language-Catalan #language-Cebuano #language-Mandarin Chinese #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Persian #language-Fulah #language-Finnish #language-Tagalog #language-French #language-Irish #language-Galician #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kamba (Kenya) #language-Kabuverdianu #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Central Kurdish #language-Kirghiz #language-Luxembourgish #language-Ganda #language-Lingala #language-Lao #language-Lithuanian #language-Luo (Kenya and Tanzania) #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Norwegian Bokmål #language-Nepali (individual language) #language-Dutch #language-Pedi #language-Nyanja #language-Occitan (post 1500) #language-Oromo #language-Odia #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Bulgarian #language-Sindhi #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Serbian #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkish #language-Ukrainian #language-Umbundu #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Yue Chinese #language-Zulu #license-cc-by-4.0 #speech-recognition #arxiv-2205.12446 #arxiv-2106.03193 #region-us \n", "# FLEURS", "## Dataset Description\n\n- Fine-Tuning script: pytorch/speech-recognition\n- Paper: FLEURS: Few-shot Learning Evaluation of\nUniversal Representations of Speech\n- Total amount of disk used: ca. 350 GB\n\nFleurs is the speech version of the FLoRes machine translation benchmark. \nWe use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages. \n\nTraining sets have around 10 hours of supervision. Speakers of the train sets are different than speakers from the dev/test sets. Multilingual fine-tuning is\nused and ”unit error rate” (characters, signs) of all languages is averaged. 
Languages and results are also grouped into seven geographical areas: \n\n- Western Europe: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh* \n- Eastern Europe: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*\n- Central-Asia/Middle-East/North-Africa: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*\n- Sub-Saharan Africa: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*\n- South-Asia: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*\n- South-East Asia: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*\n- CJK languages: *Cantonese and Mandarin Chinese, Japanese, Korean*", "## How to use & Supported Tasks", "### How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi_in\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).\n\nLocal:\n\n\n\nStreaming:\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL", "### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with 'transformers' - here.\n\nFine-tune your own Language Identification models on FLEURS with 'transformers' - here", "### 1. Speech Recognition (ASR)", "### 2. Language Identification\n\nLangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple, FLEURS-LangID is splitted in train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all.", "### 3. Retrieval\n\nRetrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoder for speech retrieval. The system has to retrieve the English \"key\" utterance corresponding to the speech translation of \"queries\" in 15 languages. Results have to be reported on the test sets of Retrieval whose utterances are used as queries (and keys for English). 
We augment the English keys with a large number of utterances to make the task more difficult.\n\n\n\nUsers can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.", "## Dataset Structure\n\nWe show detailed information the example configurations 'af_za' of the dataset.\nAll other configurations have the same structure.", "### Data Instances\n\naf_za\n- Size of downloaded dataset files: 1.47 GB\n- Size of the generated dataset: 1 MB\n- Total amount of disk used: 1.47 GB\n\nAn example of a data instance of the config 'af_za' looks as follows:", "### Data Fields\n\nThe data fields are the same among all splits.\n- id (int): ID of audio sample\n- num_samples (int): Number of float values\n- path (str): Path to the audio file\n- audio (dict): Audio object including loaded audio array, sampling rate and path ot audio\n- raw_transcription (str): The non-normalized transcription of the audio file\n- transcription (str): Transcription of the audio file\n- gender (int): Class id of gender\n- lang_id (int): Class id of language\n- lang_group_id (int): Class id of language group", "### Data Splits\n\nEvery config only has the '\"train\"' split containing of *ca.* 1000 examples, and a '\"validation\"' and '\"test\"' split each containing of *ca.* 400 examples.", "## Dataset Creation\n\nWe collect between one and three recordings for each sentence (2.3 on average), and buildnew train-dev-test splits with 1509, 150 and 350 sentences for\ntrain, dev and test respectively.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goal is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).", "### Discussion of Biases\n\nMost datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.", "### Other Known Limitations\n\nThe dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.", "## Additional Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).\n\n\n\nYou can access the FLEURS paper at URL\nPlease cite the paper when referencing the FLEURS corpus as:", "### Contributions\n\nThanks to @patrickvonplaten and @aconneau for adding this dataset." ]
d55ad9aa644a3b7cea98e5b0dc408792eb3369e0
# Open Medieval French Source: [https://github.com/OpenMedFr/texts](https://github.com/OpenMedFr/texts)
bigscience-historical-texts/Open_Medieval_French
[ "language:fro", "region:us" ]
2022-04-19T09:49:03+00:00
{"language": ["fro"]}
2022-12-12T08:50:28+00:00
[]
[ "fro" ]
TAGS #language-Old French (842-ca. 1400) #region-us
# Open Medieval French Source: URL
[ "# Open Medieval French\n\nSource: URL" ]
[ "TAGS\n#language-Old French (842-ca. 1400) #region-us \n", "# Open Medieval French\n\nSource: URL" ]
40abf43ceb2f64198e063e981f46d58ef07c304d
# Wikinews-fr-100 Benchmark Dataset for Keyphrase Generation ## About Wikinews-fr-100 is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 100 news articles in French collected from [wikinews](https://fr.wikinews.org/wiki/Accueil). Keyphrases were annotated by readers (students in computer science) in an uncontrolled setting (that is, not limited to thesaurus entries). Details about the dataset can be found in the original paper [(Bougouin et al., 2013)][bougouin-2013]. Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract. Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`. ## Content and statistics The dataset contains the following test split: | Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen | | :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: | | Test | 100 | 306.9 | 9.64 | 95.91 | 1.40 | 0.85 | 1.84 | The following data fields are available: - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. ## References - (Bougouin et al., 2013) Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013. [TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction][bougouin-2013]. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing. - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021]. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [bougouin-2013]: https://aclanthology.org/I13-1062/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
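As a rough illustration of the pre-processing described above, here is a sketch of the tokenization and stemming steps. The hyphen handling shown is an assumption about what `prmu.py` does, not the actual implementation:

```python
# Sketch of the described pre-processing; assumes spacy (fr_core_news_sm)
# and nltk are installed. The infix filtering below is one common way to
# keep hyphenated words such as "graph-based" as single tokens.
import spacy
from spacy.util import compile_infix_regex
from nltk.stem.snowball import SnowballStemmer

nlp = spacy.load("fr_core_news_sm")
# Drop the default infix rule that splits tokens on hyphens.
infixes = [p for p in nlp.Defaults.infixes if "-|–|—" not in p]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

stemmer = SnowballStemmer("french")

def preprocess(text):
    """Tokenize with spacy, then stem each token with the Snowball stemmer."""
    return [stemmer.stem(token.text) for token in nlp(text)]
```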
taln-ls2n/wikinews-fr-100
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:monolingual", "size_categories:n<1K", "language:fr", "license:cc-by-4.0", "region:us" ]
2022-04-19T10:55:39+00:00
{"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["fr"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "Wikinews-fr-100"}
2022-09-23T06:38:18+00:00
[]
[ "fr" ]
TAGS #task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-n<1K #language-French #license-cc-by-4.0 #region-us
Wikinews-fr-100 Benchmark Dataset for Keyphrase Generation ========================================================== About ----- Wikinews-fr-100 is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 100 news articles in French collected from wikinews. Keyphrases were annotated by readers (students in computer science) in an uncontrolled setting (that is, not limited to thesaurus entries). Details about the dataset can be found in the original paper [(Bougouin et al., 2013)](URL). Reference (indexer-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in [(Boudin and Gallina, 2021)](URL). Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract. Text pre-processing (tokenization) is carried out using 'spacy' ('fr\_core\_news\_sm' model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in 'nltk') is applied before reference keyphrases are matched against the source text. Details about the process can be found in 'URL'. Content and statistics ---------------------- The dataset contains the following test split: The following data fields are available: * id: unique identifier of the document. * title: title of the document. * abstract: abstract of the document. * keyphrases: list of reference keyphrases. * prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases. References ---------- * (Bougouin et al., 2013) Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013. [TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction](URL). In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing. * (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](URL). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[]
[ "TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-n<1K #language-French #license-cc-by-4.0 #region-us \n" ]
986fd2f2865cc142a9177e27e11b5585bd0c885a
# TALN-Archives Benchmark Dataset for Keyphrase Generation ## About TALN-Archives is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 1207 abstracts of scientific papers in French collected from the [TALN Archives](http://talnarchives.atala.org/). Keyphrases were annotated by authors in an uncontrolled setting (that is, not limited to thesaurus entries). English translations of title/abstract/keyphrases are also available for a subset of the documents (456 fully- and 719 partially-translated documents), allowing experimentation with cross-lingual / multilingual keyphrase generation. Details about the dataset can be found in the original paper [(Boudin, 2013)][boudin-2013]. Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract. Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`. ## Content and statistics The dataset contains the following test split: | Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen | | :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: | | Test | 1207 | 138.3 | 4.12 | 53.83 | 12.32 | 21.69 | 12.16 | The following data fields are available: - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. - **translation**: translations of title, abstract and keyphrases in English if available. ## References - (Boudin, 2013) Florian Boudin. 2013. [TALN Archives : a digital archive of French research articles in Natural Language Processing (TALN Archives : une archive numérique francophone des articles de recherche en Traitement Automatique de la Langue) [in French]][boudin-2013]. In Proceedings of TALN 2013 (Volume 2: Short Papers), pages 507–514, Les Sables d’Olonne, France. ATALA. - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021]. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [boudin-2013]: https://aclanthology.org/F13-2001/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
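To make the PRMU matching step concrete, here is a sketch of how the stem-level "present" check can be done. Function and variable names are assumptions; the reference implementation is `prmu.py`:

```python
# Illustrative Present/Unseen check on stemmed tokens; not the actual
# prmu.py code, just the matching idea it describes.
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("french")

def is_present(keyphrase, text_tokens):
    """True if the stemmed keyphrase occurs contiguously in the stemmed text."""
    kp = [stemmer.stem(w) for w in keyphrase.split()]
    toks = [stemmer.stem(w) for w in text_tokens]
    return any(toks[i:i + len(kp)] == kp for i in range(len(toks) - len(kp) + 1))
```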
taln-ls2n/taln-archives
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:multilingual", "size_categories:1K<n<10K", "language:fr", "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-19T12:45:33+00:00
{"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["fr", "en"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "TALN-Archives"}
2022-09-23T06:58:07+00:00
[]
[ "fr", "en" ]
TAGS #task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-multilingual #size_categories-1K<n<10K #language-French #language-English #license-cc-by-4.0 #region-us
TALN-Archives Benchmark Dataset for Keyphrase Generation ======================================================== About ----- TALN-Archives is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 1207 abstracts of scientific papers in French collected from the TALN Archives. Keyphrases were annotated by authors in an uncontrolled setting (that is, not limited to thesaurus entries). English translations of title/abstract/keyphrases are also available for a subset of the documents (456 fully- and 719 partially-translated documents), allowing experimentation with cross-lingual / multilingual keyphrase generation. Details about the dataset can be found in the original paper [(Boudin, 2013)](URL). Reference (indexer-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in [(Boudin and Gallina, 2021)](URL). Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract. Text pre-processing (tokenization) is carried out using 'spacy' ('fr\_core\_news\_sm' model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in 'nltk') is applied before reference keyphrases are matched against the source text. Details about the process can be found in 'URL'. Content and statistics ---------------------- The dataset contains the following test split: The following data fields are available: * id: unique identifier of the document. * title: title of the document. * abstract: abstract of the document. * keyphrases: list of reference keyphrases. * prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases. * translation: translations of title, abstract and keyphrases in English if available. References ---------- * (Boudin, 2013) Florian Boudin. 2013. [TALN Archives : a digital archive of French research articles in Natural Language Processing (TALN Archives : une archive numérique francophone des articles de recherche en Traitement Automatique de la Langue) [in French]](URL). In Proceedings of TALN 2013 (Volume 2: Short Papers), pages 507–514, Les Sables d’Olonne, France. ATALA. * (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](URL). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[]
[ "TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-multilingual #size_categories-1K<n<10K #language-French #language-English #license-cc-by-4.0 #region-us \n" ]
803ef9f038eb29517b85712d839188e2daf7629e
# 🏴‍☠️ **_This dataset is deprecated! Please see NENA Speech_** 🏴‍☠️ # Dataset Card for urmi_assyrian_voice ## Dataset Description The Urmi Assyrian Voice dataset is parsed from the research and fieldwork of [Geoffrey Khan](https://cambridge.academia.edu/GeoffreyKhan) which is made public through the [North-Eastern Neo-Aramaic Database Project](https://nena.ames.cam.ac.uk/dialects/225/audio). Annotation corrections as well as parsing were performed by Matthew Nazari. ## Dataset Summary This dataset contains labelled audio examples of the Urmi dialect of North-Eastern Neo-Aramaic. The dataset only consists of one female speaker in her late seventies. Note that you will need to normalize the utterances for machine learning tasks (clean punctuation, remove accents, etc.).
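Since the card leaves normalization to the user, here is a minimal normalization sketch. The exact cleaning rules are not specified in the card, so the steps below are assumptions:

```python
# Minimal utterance normalization sketch: strips combining accents and
# punctuation, lowercases, and collapses whitespace. Adjust to your task.
import re
import unicodedata

def normalize(utterance):
    text = unicodedata.normalize("NFD", utterance)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[^\w\s]", "", text)  # drop punctuation
    return re.sub(r"\s+", " ", text).strip().lower()
```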
mnazari/urmi-assyrian-voice
[ "task_categories:automatic-speech-recognition", "annotations_creators:Geoffrey Khan", "annotations_creators:Matthew Nazari", "language:aii", "license:cc0-1.0", "region:us" ]
2022-04-19T13:29:14+00:00
{"annotations_creators": ["Geoffrey Khan", "Matthew Nazari"], "language": "aii", "license": "cc0-1.0", "task_categories": ["automatic-speech-recognition"], "pretty_name": "Assyrian", "size_category": "n<1K"}
2023-09-22T04:31:05+00:00
[]
[ "aii" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-Geoffrey Khan #annotations_creators-Matthew Nazari #language-Assyrian Neo-Aramaic #license-cc0-1.0 #region-us
# ‍️ _This dataset is deprecated! Please see NENA Speech_ ‍️ # Dataset Card for urmi_assyrian_voice ## Dataset Description The Urmi Assyrian Voice dataset is parsed from the research and fieldwork of Geoffrey Khan which is made public through the North-Eastern Neo-Aramaic Database Project. Annotation corrections as well as parsing were performed by Matthew Nazari. ## Dataset Summary This dataset contains labelled audio examples of the Urmi dialect of North-Eastern Neo-Aramaic. The dataset only consists of one female speaker in her late seventies. Note that you will need to normalize the utterances for machine learning tasks (clean punctuation, remove accents, etc.).
[ "# ‍️ _This dataset is depricated! Please see NENA Speech_ ‍️", "# Dataset Card for urmi_assyrian_voice", "## Dataset Description\n\nThe Urmi Assyrian Voice dataset is parsed from the research and fieldwork of Geoffrey Khan which is made public through the North-Eastern Neo-Aramaic Database Project. Annotation corrections as well as parsing was performed by Matthew Nazari.", "## Dataset Summary\n\nThis dataset contains labelled audio examples of the Urmi dialect of North-Eastern Neo-Aramaic. The dataset only consists of one female speaker in her late seventies. Note that you will need to normalize the utterances for machine learning tasks (clean punctuation, remove accents, etc)." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-Geoffrey Khan #annotations_creators-Matthew Nazari #language-Assyrian Neo-Aramaic #license-cc0-1.0 #region-us \n", "# ‍️ _This dataset is depricated! Please see NENA Speech_ ‍️", "# Dataset Card for urmi_assyrian_voice", "## Dataset Description\n\nThe Urmi Assyrian Voice dataset is parsed from the research and fieldwork of Geoffrey Khan which is made public through the North-Eastern Neo-Aramaic Database Project. Annotation corrections as well as parsing was performed by Matthew Nazari.", "## Dataset Summary\n\nThis dataset contains labelled audio examples of the Urmi dialect of North-Eastern Neo-Aramaic. The dataset only consists of one female speaker in her late seventies. Note that you will need to normalize the utterances for machine learning tasks (clean punctuation, remove accents, etc)." ]
47054de4458827ac3fb5136f5f953ddf3deb3c53
# Dataset Card for Multi-Document ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Multi-Document repository](https://github.com/arka0821/multi_document_summarization) - **Paper:** [Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235) ### Dataset Summary Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English ## Dataset Structure ### Data Instances {"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages &amp; T-Cells to attack &amp; destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks \" Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. 
Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."} ### Data Fields {'id': unique identifier of the example \ 'docs': list of source documents \ [ 'id': id of the document \ 'text': text of the document \ ] \ 'summary': summary text } ### Data Splits The data is split into training, validation and test sets. | train | validation | test | |------:|-----------:|-----:| | 50 | 10 | 5 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{lu2020multi, title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles}, author={Arka Das, India}, journal={arXiv preprint arXiv:2010.14235}, year={2022} } ``` ### Contributions Thanks to [@arka0821](https://github.com/arka0821/multi_document_summarization) for adding this dataset.
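A short sketch of iterating over examples with the structure shown in "Data Instances" and "Data Fields" above. The file name and the one-JSON-object-per-line layout are assumptions about the release format:

```python
# Illustrative reader for the record structure above; the file name and
# JSON-lines layout are assumptions, not documented by the card.
import json

with open("train.json") as f:
    for line in f:
        example = json.loads(line)
        # Concatenate the source documents that the summary covers.
        source = " ".join(doc["text"] for doc in example["docs"])
        print(example["id"], len(example["docs"]), "docs ->", example["summary"][:60])
```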
arka0821/multi_document_summarization
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:2010.14235", "region:us" ]
2022-04-19T14:34:53+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["summarization-other-paper-abstract-generation"], "paperswithcode_id": "multi-document", "pretty_name": "Multi-Document"}
2022-10-20T18:13:26+00:00
[ "2010.14235" ]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2010.14235 #region-us
Dataset Card for Multi-Document =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: Multi-Document repository * Paper: Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles ### Dataset Summary Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. ### Supported Tasks and Leaderboards ### Languages The text in the dataset is in English Dataset Structure ----------------- ### Data Instances {"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages & T-Cells to attack & destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks " Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. 
Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."} ### Data Fields {'id': text of paper abstract 'docs': document id [ 'id': id of text 'text': text data ] 'summary': summary text } ### Data Splits The data is split into a training, validation and test. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to [@arka0821] (URL for adding this dataset.
[ "### Dataset Summary\n\n\nMulti-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe text in the dataset is in English\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{\"id\": \"n3ByHGrxH3bvfrvF\", \"docs\": [{\"id\": \"1394519630182457344\", \"text\": \"Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5\"}, {\"id\": \"1398154482463170561\", \"text\": \"The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P\"}, {\"id\": \"1354844652520792071\", \"text\": \"The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \\nPT1/2\\n#COVID19vaccines #biotech https://t.co/ET1maJznot\"}, {\"id\": \"1340189698107518976\", \"text\": \"@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages & T-Cells to attack & destroy any COVID-19 v variant at infection point\"}, {\"id\": \"1374368989581778945\", \"text\": \"@DelthiaRicks \" Pfizer and BioNTech\\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\"\"}, {\"id\": \"1353354819315126273\", \"text\": \"Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C\"}, {\"id\": \"1400821856362401792\", \"text\": \"Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\\u00a0Delta variant\\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\\u00a0Lancet\\u00a0journal.\\n https://t.co/IaCMX81X3b\"}, {\"id\": \"1367252963190665219\", \"text\": \"New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\\nhttps://t.co/B4vR1KUQ0w\"}, {\"id\": \"1375949502461394946\", \"text\": \"Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv\"}, {\"id\": \"1395428608349548550\", \"text\": \"JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ\"}], \"summary\": \"The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. 
Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it.\"}", "### Data Fields\n\n\n{'id': text of paper abstract \n\n'docs': document id \n\n[\n'id': id of text \n\n'text': text data \n\n]\n'summary': summary text\n}", "### Data Splits\n\n\nThe data is split into a training, validation and test.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to [@arka0821] (URL for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2010.14235 #region-us \n", "### Dataset Summary\n\n\nMulti-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe text in the dataset is in English\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{\"id\": \"n3ByHGrxH3bvfrvF\", \"docs\": [{\"id\": \"1394519630182457344\", \"text\": \"Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5\"}, {\"id\": \"1398154482463170561\", \"text\": \"The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P\"}, {\"id\": \"1354844652520792071\", \"text\": \"The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \\nPT1/2\\n#COVID19vaccines #biotech https://t.co/ET1maJznot\"}, {\"id\": \"1340189698107518976\", \"text\": \"@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages & T-Cells to attack & destroy any COVID-19 v variant at infection point\"}, {\"id\": \"1374368989581778945\", \"text\": \"@DelthiaRicks \" Pfizer and BioNTech\\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\"\"}, {\"id\": \"1353354819315126273\", \"text\": \"Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. 
Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C\"}, {\"id\": \"1400821856362401792\", \"text\": \"Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\\u00a0Delta variant\\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\\u00a0Lancet\\u00a0journal.\\n https://t.co/IaCMX81X3b\"}, {\"id\": \"1367252963190665219\", \"text\": \"New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\\nhttps://t.co/B4vR1KUQ0w\"}, {\"id\": \"1375949502461394946\", \"text\": \"Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv\"}, {\"id\": \"1395428608349548550\", \"text\": \"JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ\"}], \"summary\": \"The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it.\"}", "### Data Fields\n\n\n{'id': text of paper abstract \n\n'docs': document id \n\n[\n'id': id of text \n\n'text': text data \n\n]\n'summary': summary text\n}", "### Data Splits\n\n\nThe data is split into a training, validation and test.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to [@arka0821] (URL for adding this dataset." ]
4265bc69c85c8f1d21b223a8e6cc61cdf57fda95
# Dataset Card for ID Word2Phoneme ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Github](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) - **Repository:** [Github](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) - **Point of Contact:** - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** ### Dataset Summary Originally a [Malay/Indonesian Lexicon](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) retrieved from [ipa-dict](https://github.com/open-dict-data/ipa-dict). We removed the accented letters (because Indonesian graphemes do not use accents), separated homographs, and removed backslashes in phonemes -- resulting in a word-to-phoneme dataset. ### Languages - Indonesian - Malay ## Dataset Structure ### Data Instances | word | phoneme | | ----- | ------- | | aba | aba | | ab | ab | | ab’ad | abʔad | | abad | abad | | abadi | abadi | | ... | ... | ### Data Fields - `word`: Word (grapheme) as a string. - `phoneme`: Phoneme (IPA) as a string. ### Data Splits | train | | ----- | | 27553 | ## Additional Information ### Citation Information ``` @misc{open-dict-data-no-date, author = {{Open-Dict-Data}}, title = {{GitHub - open-dict-data/ipa-dict: Monolingual wordlists with pronunciation information in IPA}}, url = {https://github.com/open-dict-data/ipa-dict}, } ```
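For quick use, a minimal loading sketch with the Hugging Face `datasets` library. The `bookbot/id_word2phoneme` identifier and the single `train` split are taken from this record; the lookup table built here is only an illustration:

```python
from datasets import load_dataset

# Load the word-to-phoneme lexicon; the card documents a single
# "train" split with 27,553 entries.
ds = load_dataset("bookbot/id_word2phoneme", split="train")

# Build a simple grapheme-to-phoneme lookup table.
g2p = {row["word"]: row["phoneme"] for row in ds}
print(g2p.get("abadi"))  # expected: "abadi", per the examples above
```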
bookbot/id_word2phoneme
[ "task_categories:text2text-generation", "annotations_creators:no-annotation", "language_creators:found", "source_datasets:original", "language:id", "language:ms", "region:us" ]
2022-04-20T06:37:29+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id", "ms"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "ID Word2Phoneme"}
2023-03-20T10:00:22+00:00
[]
[ "id", "ms" ]
TAGS #task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-found #source_datasets-original #language-Indonesian #language-Malay (macrolanguage) #region-us
Dataset Card for ID Word2Phoneme ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Additional Information + Citation Information Dataset Description ------------------- * Homepage: Github * Repository: Github * Point of Contact: * Size of downloaded dataset files: * Size of the generated dataset: * Total amount of disk used: ### Dataset Summary Originally a Malay/Indonesian Lexicon retrieved from ipa-dict. We removed the accented letters (because Indonesian graphemes do not use accents), separated homographs, and removed backslashes in phonemes -- resulting in a word-to-phoneme dataset. ### Languages * Indonesian * Malay Dataset Structure ----------------- ### Data Instances ### Data Fields * 'word': Word (grapheme) as a string. * 'phoneme': Phoneme (IPA) as a string. ### Data Splits Additional Information ----------------------
[ "### Dataset Summary\n\n\nOriginally a Malay/Indonesian Lexicon retrieved from ipa-dict. We removed the accented letters (because Indonesian graphemes do not use accents), separated homographs, and removed backslashes in phonemes -- resulting in a word-to-phoneme dataset.", "### Languages\n\n\n* Indonesian\n* Malay\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'word': Word (grapheme) as a string.\n* 'phoneme': Phoneme (IPA) as a string.", "### Data Splits\n\n\n\nAdditional Information\n----------------------" ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-found #source_datasets-original #language-Indonesian #language-Malay (macrolanguage) #region-us \n", "### Dataset Summary\n\n\nOriginally a Malay/Indonesian Lexicon retrieved from ipa-dict. We removed the accented letters (because Indonesian graphemes do not use accents), separated homographs, and removed backslashes in phonemes -- resulting in a word-to-phoneme dataset.", "### Languages\n\n\n* Indonesian\n* Malay\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'word': Word (grapheme) as a string.\n* 'phoneme': Phoneme (IPA) as a string.", "### Data Splits\n\n\n\nAdditional Information\n----------------------" ]
a064f3d34057b27eb27a12bf81165f5fad9a09f1
# Dataset Card for "CrossSum" ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/csebuetnlp/CrossSum](https://github.com/csebuetnlp/CrossSum) - **Paper:** [CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs](https://arxiv.org/abs/2112.08804) - **Point of Contact:** [Tahmid Hasan](mailto:[email protected]) ### Dataset Summary We present CrossSum, a large-scale dataset comprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs constituting 45 languages. We use the multilingual XL-Sum dataset and align identical articles written in different languages via crosslingual retrieval using a language-agnostic representation model. ### Supported Tasks and Leaderboards [More information needed](https://github.com/csebuetnlp/CrossSum) ### Languages - `amharic` - `arabic` - `azerbaijani` - `bengali` - `burmese` - `chinese_simplified` - `chinese_traditional` - `english` - `french` - `gujarati` - `hausa` - `hindi` - `igbo` - `indonesian` - `japanese` - `kirundi` - `korean` - `kyrgyz` - `marathi` - `nepali` - `oromo` - `pashto` - `persian` - `pidgin` - `portuguese` - `punjabi` - `russian` - `scottish_gaelic` - `serbian_cyrillic` - `serbian_latin` - `sinhala` - `somali` - `spanish` - `swahili` - `tamil` - `telugu` - `thai` - `tigrinya` - `turkish` - `ukrainian` - `urdu` - `uzbek` - `vietnamese` - `welsh` - `yoruba` ## Loading the dataset ```python from datasets import load_dataset # for available language names, see above src_lang = "english" tgt_lang = "bengali" ds = load_dataset(f"csebuetnlp/CrossSum", "{}-{}".format(src_lang, tgt_lang)) ``` ## Dataset Structure ### Data Instances One example from the `English` dataset is given below in JSON format. 
``` { "source_url": "https://www.bbc.com/japanese/53074000", "target_url": "https://www.bbc.com/bengali/news-53064712", "summary": "বিজ্ঞানীরা বলছেন ডেক্সামেথাসোন নামে সস্তা ও সহজলভ্য একটি ওষুধ করোনাভাইরাসে গুরুতর অসুস্থ রোগীদের জীবন রক্ষা করতে সাহায্য করবে।", "text": "ミシェル・ロバーツ、BBCニュースオンライン健康担当編集長 英オックスフォード大学の研究チームによると、低用量のデキサメタゾンは新型ウイルスとの戦いで画期的な突破口になる。 新型コロナウイルスに対し、様々な既存の治療法の効果を試す世界的規模の臨床試験の一貫として、デキサメタゾンが試された。 その結果、人工呼吸器を必要とする重症患者の致死率が3割下がり、酸素供給を必要とする患者の場合は2割下がった。 新型ウイルスのパンデミック(世界的流行)の初期からイギリスでデキサメタゾンを治療に使用していた場合、最大5000人の命が救えたはずだと研究者たちは言う。 さらに、新型コロナウイルスによる感染症「COVID-19」の患者が多く出ている貧しい国にとっても、安価なデキサメタゾンを使う治療は大いに役立つと期待される。 重症者の致死率が大幅に下がる イギリス政府は20万人分の投与量を備蓄しており、国民医療制度の国民保健サービス(NHS)で患者への使用を開始する方針を示した。 ボリス・ジョンソン英首相は「イギリス科学界の素晴らしい成果」を歓迎し、「たとえ感染の第2波が来ても備蓄が足りるよう、数を確保するための措置をとった」と述べた。 イングランド首席医務官クリス・ウィッティー教授は、「COVID-19にとってこれまでで一番重要な臨床試験結果だ。手に入りやすく安全でなじみのある薬によって、酸素供給や人工呼吸器が必要な人の致死率が大幅に下がった。(中略)この発見が世界中で人命を救う」と評価した。 <関連記事> 新型コロナウイルスに20人が感染した場合、19人は入院しないまま回復する。入院する人もほとんどは回復するものの、重症化して酸素供給や人工呼吸器を必要とする人もいる。 デキサメタゾンはこうした重症患者の治療に効果があるもよう。 新型ウイルスに感染した患者の体内では、ウイルスと戦う免疫系が暴走することがある。その免疫系の過剰反応による体の損傷を、デキサメタゾンが緩和するものとみられる。 「サイトカイン・ストーム」と呼ばれる免疫系の過剰反応が、患者の命を奪うこともある。 デキサメタゾンはすでに抗炎症剤として、ぜんそくや皮膚炎など様々な症状の治療に使われている。 初めて致死率を下げる薬 オックスフォード大学が主導する臨床試験は、約2000人の入院患者にデキサメタゾンを投与。それ以外の4000人以上の患者と容体を比較した。 人工呼吸器を使用する患者については、死亡リスクが40%から28%に下がった。 酸素供給する患者は、死亡リスクが25%から20%に下がった。 研究チームのピーター・ホービー教授は、「今のところ、致死率を実際に下げる結果が出たのは、この薬だけだ。しかも、致死率をかなり下げる。画期的な突破口だ」と話した。 研究を主導するマーティン・ランドレイ教授によると、人工呼吸器を使う患者の8人に1人、ならびに酸素供給治療を受ける患者の20-25人に1人が、デキサメタゾンで救えることが分かったという。 「これはきわめて明確なメリットだ」と教授は言う。 「最大10日間、デキサメタゾンを投与するという治療法で、費用は患者1人あたり1日約5ポンド(約670円)。つまり、35ポンド(約4700円)で人ひとりの命が救える」 「しかもこれは、世界中で手に入る薬だ」 状況が許す限り、新型コロナウイルスで入院中の患者にはただちに投与を開始すべきだと、ランドレイ教授は促した。 ただし、自宅で自己治療するために薬局に買いに行くべきではないと言う。 デキサメタゾンは、呼吸補助を必要としない軽症の患者には効果がないもよう。 3月に始動した新型コロナウイルス治療薬の無作為化臨床試験「リカバリー・トライアル」は、抗マラリア薬「ヒドロキシクロロキン」も調べたものの、心臓疾患や致死率の悪化につながるという懸念から、ヒドロキシクロロキンについては試験を中止した。 一方で、感染者の回復にかかる時間を短縮するとみられるレムデシビルは、すでにNHSの保険対象になり治療現場で使われている。 <解説> ファーガス・ウォルシュBBC健康担当編集委員 COVID-19の死者を減らすと初めて立証された薬は、高価な新しい薬ではなく、古くからずっと使われてきた、きわめて安いステロイド剤だった。 世界中の患者が直ちにその恩恵を受けることになるので、これは歓迎すべき発見だ。 この臨床試験の最新成果がこれほど急いで発表されたのは、そのためだ。とてつもない影響を世界中にもたらすので。 デキサメタゾンは1960年代初めから、関節リウマチやぜんそくなど、幅広い症状の治療に使われてきた。 これまでは、人工呼吸器を必要とするCOVID-19患者の半数が亡くなってきた。その致死率を3割減らすというのは、絶大な効果だ。 集中治療室では点滴で投与する。もう少し軽症な患者には、錠剤で与える。 これまでのところ、COVID-19患者に効果があると証明された薬は、エボラ治療薬のレムデシビルだけだった。 レムデシビルは症状の回復期間を15日から11日に短縮する。しかし、致死率を下げると言えるだけの証拠は出ていなかった。 デキサメタゾンと異なり、レムデシビルは数の少ない新薬で、薬価もまだ公表されていない。" } ``` ### Data Fields - 'source_url': A string representing the source article URL. - 'target_url': A string representing the target article URL. - 'summary': A string containing the article summary. - 'text' : A string containing the article text. ### Data Splits No. 
of total examples for each language pair are as follows: Language (ISO 639-1-Code) | am | ar | az | bn | my | zh-CN | zh-TW | en | fr | gu | ha | hi | ig | id | ja | rn | ko | ky | mr | np | om | ps | fa | pcm | pt | pa | ru | gd | sr | sr | si | so | es | sw | ta | te | th | ti | tr | uk | ur | uz | vi | cy | yo ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- am | -- | 667 | 100 | 272 | 95 | 179 | 167 | 1456 | 358 | 173 | 221 | 377 | 26 | 494 | 264 | 423 | 244 | 92 | 221 | 301 | 21 | 192 | 431 | 209 | 307 | 189 | 347 | 0 | 357 | 365 | 62 | 309 | 351 | 378 | 390 | 329 | 124 | 131 | 435 | 345 | 409 | 41 | 285 | 1 | 67 ar | 667 | -- | 787 | 804 | 652 | 2968 | 2843 | 9653 | 989 | 475 | 747 | 3665 | 86 | 6084 | 1188 | 876 | 707 | 299 | 559 | 854 | 9 | 2161 | 4186 | 436 | 2539 | 547 | 5564 | 1 | 1109 | 1145 | 315 | 1049 | 3654 | 1186 | 1311 | 877 | 367 | 27 | 4147 | 3457 | 4935 | 388 | 2666 | 38 | 141 az | 100 | 787 | -- | 277 | 84 | 371 | 334 | 1317 | 208 | 192 | 126 | 748 | 28 | 1111 | 231 | 188 | 155 | 221 | 194 | 242 | 1 | 252 | 817 | 91 | 678 | 190 | 2238 | 4 | 289 | 283 | 124 | 367 | 704 | 539 | 515 | 245 | 140 | 2 | 1495 | 1383 | 966 | 199 | 725 | 30 | 42 bn | 272 | 804 | 277 | -- | 139 | 318 | 284 | 1549 | 317 | 559 | 231 | 1396 | 35 | 1076 | 342 | 298 | 352 | 154 | 586 | 668 | 2 | 300 | 790 | 135 | 764 | 580 | 838 | 0 | 562 | 564 | 151 | 412 | 701 | 471 | 919 | 793 | 245 | 6 | 860 | 688 | 1382 | 98 | 527 | 37 | 61 my | 95 | 652 | 84 | 139 | -- | 356 | 314 | 685 | 90 | 96 | 74 | 528 | 12 | 761 | 144 | 100 | 112 | 58 | 89 | 152 | 1 | 234 | 426 | 39 | 230 | 86 | 535 | 0 | 115 | 123 | 87 | 79 | 431 | 86 | 185 | 147 | 71 | 4 | 449 | 350 | 591 | 62 | 447 | 4 | 12 zh-CN | 179 | 2968 | 371 | 318 | 356 | -- | 47101 | 4975 | 348 | 201 | 159 | 1379 | 38 | 2851 | 1017 | 240 | 412 | 139 | 240 | 275 | 14 | 559 | 1111 | 149 | 1371 | 250 | 2572 | 2 | 504 | 530 | 166 | 323 | 2002 | 412 | 511 | 353 | 269 | 11 | 1511 | 1619 | 1651 | 176 | 1858 | 33 | 39 zh-TW | 167 | 2843 | 334 | 284 | 314 | 47101 | -- | 4884 | 331 | 174 | 150 | 1213 | 35 | 2588 | 953 | 209 | 382 | 131 | 213 | 252 | 16 | 501 | 967 | 141 | 1271 | 226 | 2286 | 1 | 453 | 494 | 150 | 302 | 1873 | 383 | 465 | 335 | 250 | 12 | 1294 | 1464 | 1444 | 158 | 1663 | 31 | 38 en | 1456 | 9653 | 1317 | 1549 | 685 | 4975 | 4884 | -- | 1889 | 978 | 913 | 4728 | 144 | 10040 | 3040 | 1878 | 1673 | 490 | 1181 | 1614 | 38 | 1522 | 4680 | 1074 | 4744 | 1330 | 9080 | 128 | 3760 | 3809 | 532 | 2141 | 6910 | 2701 | 3156 | 2121 | 1020 | 58 | 5676 | 6562 | 6320 | 450 | 4574 | 2655 | 229 fr | 358 | 989 | 208 | 317 | 90 | 348 | 331 | 1889 | -- | 242 | 477 | 616 | 106 | 1018 | 274 | 735 | 264 | 124 | 241 | 323 | 4 | 196 | 602 | 439 | 921 | 247 | 849 | 2 | 555 | 569 | 98 | 502 | 990 | 872 | 425 | 380 | 185 | 10 | 829 | 721 | 766 | 76 | 438 | 40 | 159 gu | 173 | 475 | 192 | 559 | 96 | 201 | 174 | 978 | 242 | -- | 147 | 5170 | 34 | 710 | 228 | 183 | 268 | 106 | 2091 | 561 | 1 | 246 | 522 | 101 | 529 | 2210 | 582 | 0 | 331 | 345 | 125 | 261 | 540 | 300 | 1762 | 2066 | 164 | 5 | 631 | 508 | 1619 | 80 | 450 | 21 | 54 ha | 221 | 747 | 126 | 231 | 74 | 159 | 150 | 913 | 477 | 147 | -- | 460 | 202 | 901 | 157 | 485 | 135 | 61 | 159 | 239 | 5 | 229 | 487 | 529 | 
375 | 157 | 525 | 1 | 258 | 258 | 49 | 391 | 463 | 568 | 299 | 260 | 87 | 9 | 519 | 400 | 526 | 59 | 352 | 30 | 362 hi | 377 | 3665 | 748 | 1396 | 528 | 1379 | 1213 | 4728 | 616 | 5170 | 460 | -- | 65 | 5627 | 623 | 489 | 520 | 234 | 3831 | 1357 | 4 | 1519 | 5351 | 192 | 6563 | 4052 | 4622 | 1 | 809 | 807 | 449 | 747 | 2931 | 893 | 3711 | 3762 | 378 | 7 | 3694 | 3935 | 15666 | 352 | 3738 | 77 | 79 ig | 26 | 86 | 28 | 35 | 12 | 38 | 35 | 144 | 106 | 34 | 202 | 65 | -- | 113 | 24 | 107 | 32 | 16 | 51 | 36 | 3 | 11 | 49 | 255 | 61 | 39 | 79 | 0 | 51 | 51 | 13 | 77 | 91 | 151 | 52 | 54 | 18 | 5 | 91 | 83 | 61 | 15 | 65 | 6 | 296 id | 494 | 6084 | 1111 | 1076 | 761 | 2851 | 2588 | 10040 | 1018 | 710 | 901 | 5627 | 113 | -- | 1274 | 994 | 774 | 347 | 745 | 1104 | 8 | 1430 | 3892 | 367 | 4409 | 725 | 7588 | 7 | 1387 | 1379 | 470 | 1312 | 4547 | 1873 | 1886 | 1131 | 599 | 9 | 5663 | 4829 | 6476 | 432 | 4810 | 145 | 174 ja | 264 | 1188 | 231 | 342 | 144 | 1017 | 953 | 3040 | 274 | 228 | 157 | 623 | 24 | 1274 | -- | 372 | 654 | 140 | 302 | 424 | 2 | 266 | 1014 | 152 | 706 | 269 | 1517 | 2 | 550 | 571 | 109 | 387 | 950 | 425 | 641 | 425 | 305 | 5 | 1242 | 1013 | 797 | 49 | 908 | 25 | 33 rn | 423 | 876 | 188 | 298 | 100 | 240 | 209 | 1878 | 735 | 183 | 485 | 489 | 107 | 994 | 372 | -- | 283 | 106 | 242 | 369 | 18 | 228 | 684 | 398 | 526 | 206 | 711 | 0 | 443 | 450 | 77 | 584 | 607 | 1186 | 521 | 363 | 149 | 13 | 724 | 610 | 617 | 59 | 631 | 20 | 180 ko | 244 | 707 | 155 | 352 | 112 | 412 | 382 | 1673 | 264 | 268 | 135 | 520 | 32 | 774 | 654 | 283 | -- | 99 | 319 | 445 | 1 | 150 | 596 | 130 | 587 | 264 | 649 | 0 | 522 | 543 | 81 | 234 | 613 | 324 | 541 | 452 | 197 | 5 | 680 | 616 | 532 | 54 | 530 | 12 | 45 ky | 92 | 299 | 221 | 154 | 58 | 139 | 131 | 490 | 124 | 106 | 61 | 234 | 16 | 347 | 140 | 106 | 99 | -- | 107 | 167 | 4 | 102 | 252 | 59 | 251 | 118 | 1013 | 1 | 206 | 211 | 45 | 145 | 279 | 150 | 206 | 174 | 109 | 3 | 346 | 508 | 270 | 113 | 201 | 12 | 23 mr | 221 | 559 | 194 | 586 | 89 | 240 | 213 | 1181 | 241 | 2091 | 159 | 3831 | 51 | 745 | 302 | 242 | 319 | 107 | -- | 630 | 1 | 232 | 608 | 138 | 524 | 1797 | 675 | 0 | 419 | 436 | 129 | 270 | 603 | 332 | 1776 | 1886 | 196 | 11 | 706 | 596 | 1395 | 79 | 473 | 16 | 48 np | 301 | 854 | 242 | 668 | 152 | 275 | 252 | 1614 | 323 | 561 | 239 | 1357 | 36 | 1104 | 424 | 369 | 445 | 167 | 630 | -- | 1 | 303 | 916 | 134 | 706 | 545 | 849 | 2 | 553 | 538 | 164 | 420 | 687 | 513 | 994 | 741 | 217 | 7 | 930 | 741 | 1156 | 84 | 719 | 39 | 65 om | 21 | 9 | 1 | 2 | 1 | 14 | 16 | 38 | 4 | 1 | 5 | 4 | 3 | 8 | 2 | 18 | 1 | 4 | 1 | 1 | -- | 2 | 3 | 11 | 4 | 6 | 8 | 0 | 2 | 3 | 0 | 6 | 7 | 5 | 2 | 2 | 1 | 103 | 5 | 10 | 1 | 4 | 2 | 0 | 7 ps | 192 | 2161 | 252 | 300 | 234 | 559 | 501 | 1522 | 196 | 246 | 229 | 1519 | 11 | 1430 | 266 | 228 | 150 | 102 | 232 | 303 | 2 | -- | 2815 | 94 | 594 | 249 | 1246 | 0 | 235 | 242 | 156 | 304 | 766 | 314 | 441 | 314 | 92 | 8 | 1049 | 818 | 2833 | 156 | 657 | 7 | 32 fa | 431 | 4186 | 817 | 790 | 426 | 1111 | 967 | 4680 | 602 | 522 | 487 | 5351 | 49 | 3892 | 1014 | 684 | 596 | 252 | 608 | 916 | 3 | 2815 | -- | 186 | 5512 | 541 | 4328 | 0 | 1028 | 1023 | 276 | 812 | 2512 | 1002 | 1250 | 797 | 364 | 8 | 3695 | 3567 | 6752 | 313 | 3190 | 66 | 74 pcm | 209 | 436 | 91 | 135 | 39 | 149 | 141 | 1074 | 439 | 101 | 529 | 192 | 255 | 367 | 152 | 398 | 130 | 59 | 138 | 134 | 11 | 94 | 186 | -- | 227 | 112 | 322 | 0 | 234 | 246 | 28 | 219 | 314 | 436 | 232 | 162 | 85 | 28 | 287 | 280 | 232 | 18 | 170 | 9 | 462 pt | 307 | 2539 | 678 | 764 | 230 | 
1371 | 1271 | 4744 | 921 | 529 | 375 | 6563 | 61 | 4409 | 706 | 526 | 587 | 251 | 524 | 706 | 4 | 594 | 5512 | 227 | -- | 579 | 4452 | 7 | 1371 | 1341 | 231 | 602 | 7112 | 983 | 1042 | 820 | 468 | 3 | 3483 | 4421 | 6759 | 186 | 3754 | 110 | 97 pa | 189 | 547 | 190 | 580 | 86 | 250 | 226 | 1330 | 247 | 2210 | 157 | 4052 | 39 | 725 | 269 | 206 | 264 | 118 | 1797 | 545 | 6 | 249 | 541 | 112 | 579 | -- | 629 | 0 | 410 | 404 | 128 | 283 | 585 | 357 | 1726 | 1892 | 200 | 10 | 643 | 570 | 1515 | 73 | 431 | 16 | 44 ru | 347 | 5564 | 2238 | 838 | 535 | 2572 | 2286 | 9080 | 849 | 582 | 525 | 4622 | 79 | 7588 | 1517 | 711 | 649 | 1013 | 675 | 849 | 8 | 1246 | 4328 | 322 | 4452 | 629 | -- | 5 | 1495 | 1460 | 373 | 1166 | 4864 | 1672 | 1628 | 892 | 595 | 7 | 6223 | 22241 | 5309 | 809 | 3963 | 134 | 125 gd | 0 | 1 | 4 | 0 | 0 | 2 | 1 | 128 | 2 | 0 | 1 | 1 | 0 | 7 | 2 | 0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 7 | 0 | 5 | -- | 2 | 3 | 2 | 1 | 3 | 1 | 0 | 0 | 1 | 0 | 6 | 5 | 2 | 1 | 3 | 36 | 2 sr | 357 | 1109 | 289 | 562 | 115 | 504 | 453 | 3760 | 555 | 331 | 258 | 809 | 51 | 1387 | 550 | 443 | 522 | 206 | 419 | 553 | 2 | 235 | 1028 | 234 | 1371 | 410 | 1495 | 2 | -- | 9041 | 127 | 377 | 1235 | 574 | 761 | 691 | 340 | 6 | 1247 | 1512 | 1021 | 109 | 685 | 42 | 69 sr | 365 | 1145 | 283 | 564 | 123 | 530 | 494 | 3809 | 569 | 345 | 258 | 807 | 51 | 1379 | 571 | 450 | 543 | 211 | 436 | 538 | 3 | 242 | 1023 | 246 | 1341 | 404 | 1460 | 3 | 9041 | -- | 137 | 382 | 1260 | 568 | 775 | 699 | 347 | 10 | 1229 | 1498 | 1009 | 112 | 639 | 45 | 79 si | 62 | 315 | 124 | 151 | 87 | 166 | 150 | 532 | 98 | 125 | 49 | 449 | 13 | 470 | 109 | 77 | 81 | 45 | 129 | 164 | 0 | 156 | 276 | 28 | 231 | 128 | 373 | 2 | 127 | 137 | -- | 137 | 260 | 189 | 348 | 173 | 69 | 7 | 301 | 306 | 510 | 38 | 216 | 5 | 15 so | 309 | 1049 | 367 | 412 | 79 | 323 | 302 | 2141 | 502 | 261 | 391 | 747 | 77 | 1312 | 387 | 584 | 234 | 145 | 270 | 420 | 6 | 304 | 812 | 219 | 602 | 283 | 1166 | 1 | 377 | 382 | 137 | -- | 689 | 1020 | 723 | 384 | 178 | 19 | 968 | 875 | 1000 | 75 | 724 | 20 | 116 es | 351 | 3654 | 704 | 701 | 431 | 2002 | 1873 | 6910 | 990 | 540 | 463 | 2931 | 91 | 4547 | 950 | 607 | 613 | 279 | 603 | 687 | 7 | 766 | 2512 | 314 | 7112 | 585 | 4864 | 3 | 1235 | 1260 | 260 | 689 | -- | 1047 | 1073 | 827 | 469 | 10 | 3645 | 3130 | 3060 | 290 | 2330 | 59 | 133 sw | 378 | 1186 | 539 | 471 | 86 | 412 | 383 | 2701 | 872 | 300 | 568 | 893 | 151 | 1873 | 425 | 1186 | 324 | 150 | 332 | 513 | 5 | 314 | 1002 | 436 | 983 | 357 | 1672 | 1 | 574 | 568 | 189 | 1020 | 1047 | -- | 929 | 492 | 261 | 10 | 1348 | 1309 | 1253 | 90 | 936 | 37 | 219 ta | 390 | 1311 | 515 | 919 | 185 | 511 | 465 | 3156 | 425 | 1762 | 299 | 3711 | 52 | 1886 | 641 | 521 | 541 | 206 | 1776 | 994 | 2 | 441 | 1250 | 232 | 1042 | 1726 | 1628 | 0 | 761 | 775 | 348 | 723 | 1073 | 929 | -- | 2278 | 400 | 14 | 1486 | 1423 | 2404 | 134 | 1092 | 32 | 68 te | 329 | 877 | 245 | 793 | 147 | 353 | 335 | 2121 | 380 | 2066 | 260 | 3762 | 54 | 1131 | 425 | 363 | 452 | 174 | 1886 | 741 | 2 | 314 | 797 | 162 | 820 | 1892 | 892 | 0 | 691 | 699 | 173 | 384 | 827 | 492 | 2278 | -- | 306 | 11 | 893 | 832 | 1748 | 107 | 644 | 21 | 61 th | 124 | 367 | 140 | 245 | 71 | 269 | 250 | 1020 | 185 | 164 | 87 | 378 | 18 | 599 | 305 | 149 | 197 | 109 | 196 | 217 | 1 | 92 | 364 | 85 | 468 | 200 | 595 | 1 | 340 | 347 | 69 | 178 | 469 | 261 | 400 | 306 | -- | 5 | 477 | 480 | 414 | 37 | 357 | 10 | 26 ti | 131 | 27 | 2 | 6 | 4 | 11 | 12 | 58 | 10 | 5 | 9 | 7 | 5 | 9 | 5 | 13 | 5 | 3 | 11 | 7 | 103 | 8 | 8 | 28 | 3 | 10 | 7 | 0 | 
6 | 10 | 7 | 19 | 10 | 10 | 14 | 11 | 5 | -- | 8 | 8 | 4 | 2 | 5 | 0 | 6 tr | 435 | 4147 | 1495 | 860 | 449 | 1511 | 1294 | 5676 | 829 | 631 | 519 | 3694 | 91 | 5663 | 1242 | 724 | 680 | 346 | 706 | 930 | 5 | 1049 | 3695 | 287 | 3483 | 643 | 6223 | 6 | 1247 | 1229 | 301 | 968 | 3645 | 1348 | 1486 | 893 | 477 | 8 | -- | 4108 | 4340 | 370 | 2981 | 126 | 130 uk | 345 | 3457 | 1383 | 688 | 350 | 1619 | 1464 | 6562 | 721 | 508 | 400 | 3935 | 83 | 4829 | 1013 | 610 | 616 | 508 | 596 | 741 | 10 | 818 | 3567 | 280 | 4421 | 570 | 22241 | 5 | 1512 | 1498 | 306 | 875 | 3130 | 1309 | 1423 | 832 | 480 | 8 | 4108 | -- | 4290 | 442 | 3017 | 108 | 89 ur | 409 | 4935 | 966 | 1382 | 591 | 1651 | 1444 | 6320 | 766 | 1619 | 526 | 15666 | 61 | 6476 | 797 | 617 | 532 | 270 | 1395 | 1156 | 1 | 2833 | 6752 | 232 | 6759 | 1515 | 5309 | 2 | 1021 | 1009 | 510 | 1000 | 3060 | 1253 | 2404 | 1748 | 414 | 4 | 4340 | 4290 | -- | 389 | 3723 | 72 | 88 uz | 41 | 388 | 199 | 98 | 62 | 176 | 158 | 450 | 76 | 80 | 59 | 352 | 15 | 432 | 49 | 59 | 54 | 113 | 79 | 84 | 4 | 156 | 313 | 18 | 186 | 73 | 809 | 1 | 109 | 112 | 38 | 75 | 290 | 90 | 134 | 107 | 37 | 2 | 370 | 442 | 389 | -- | 257 | 10 | 15 vi | 285 | 2666 | 726 | 527 | 447 | 1858 | 1663 | 4575 | 438 | 450 | 352 | 3738 | 65 | 4810 | 908 | 631 | 530 | 201 | 473 | 719 | 2 | 657 | 3190 | 170 | 3755 | 431 | 3963 | 3 | 685 | 639 | 216 | 724 | 2330 | 936 | 1092 | 644 | 357 | 5 | 2982 | 3017 | 3723 | 257 | -- | 106 | 76 cy | 1 | 38 | 30 | 37 | 4 | 33 | 31 | 2655 | 40 | 21 | 30 | 77 | 6 | 145 | 25 | 20 | 12 | 12 | 16 | 39 | 0 | 7 | 66 | 9 | 110 | 16 | 134 | 36 | 42 | 45 | 5 | 20 | 59 | 37 | 32 | 21 | 10 | 0 | 126 | 108 | 72 | 10 | 106 | -- | 8 yo | 67 | 141 | 42 | 61 | 12 | 39 | 38 | 229 | 159 | 54 | 362 | 79 | 296 | 174 | 33 | 180 | 45 | 23 | 48 | 65 | 7 | 32 | 74 | 462 | 97 | 44 | 125 | 2 | 69 | 79 | 15 | 116 | 133 | 219 | 68 | 61 | 26 | 6 | 130 | 89 | 88 | 15 | 76 | 8 | -- ## Dataset Creation ### Curation Rationale [More information needed](https://github.com/csebuetnlp/CrossSum) ### Source Data [BBC News](https://www.bbc.co.uk/ws/languages) #### Initial Data Collection and Normalization [Detailed in the paper](https://arxiv.org/abs/2112.08804/) #### Who are the source language producers? [Detailed in the paper](https://arxiv.org/abs/2112.08804/) ### Annotations [Detailed in the paper](https://arxiv.org/abs/2112.08804/) #### Annotation process [Detailed in the paper](https://arxiv.org/abs/2112.08804/) #### Who are the annotators? [Detailed in the paper](https://arxiv.org/abs/2112.08804/) ### Personal and Sensitive Information [More information needed](https://github.com/csebuetnlp/CrossSum) ## Considerations for Using the Data ### Social Impact of Dataset [More information needed](https://github.com/csebuetnlp/CrossSum) ### Discussion of Biases [More information needed](https://github.com/csebuetnlp/CrossSum) ### Other Known Limitations [More information needed](https://github.com/csebuetnlp/CrossSum) ## Additional Information ### Dataset Curators [More information needed](https://github.com/csebuetnlp/CrossSum) ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. 
### Citation Information If you use any of the datasets, models or code modules, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ``` ### Contributions Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
csebuetnlp/CrossSum
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:am", "language:ar", "language:az", "language:bn", "language:my", "language:zh", "language:en", "language:fr", "language:gu", "language:ha", "language:hi", "language:ig", "language:id", "language:ja", "language:rn", "language:ko", "language:ky", "language:mr", "language:ne", "language:om", "language:ps", "language:fa", "language:pcm", "language:pt", "language:pa", "language:ru", "language:gd", "language:sr", "language:si", "language:so", "language:es", "language:sw", "language:ta", "language:te", "language:th", "language:ti", "language:tr", "language:uk", "language:ur", "language:uz", "language:vi", "language:cy", "language:yo", "license:cc-by-nc-sa-4.0", "arxiv:2112.08804", "region:us" ]
2022-04-20T07:27:10+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "pretty_name": "CrossSum"}
2023-12-16T13:27:47+00:00
[ "2112.08804" ]
[ "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Amharic #language-Arabic #language-Azerbaijani #language-Bengali #language-Burmese #language-Chinese #language-English #language-French #language-Gujarati #language-Hausa #language-Hindi #language-Igbo #language-Indonesian #language-Japanese #language-Rundi #language-Korean #language-Kirghiz #language-Marathi #language-Nepali (macrolanguage) #language-Oromo #language-Pushto #language-Persian #language-Nigerian Pidgin #language-Portuguese #language-Panjabi #language-Russian #language-Scottish Gaelic #language-Serbian #language-Sinhala #language-Somali #language-Spanish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tigrinya #language-Turkish #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Welsh #language-Yoruba #license-cc-by-nc-sa-4.0 #arxiv-2112.08804 #region-us
Dataset Card for "CrossSum" =========================== Table of Contents ----------------- * Dataset Card Creation Guide + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Repository: URL * Paper: CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs * Point of Contact: Tahmid Hasan ### Dataset Summary We present CrossSum, a large-scale dataset comprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs constituting 45 languages. We use the multilingual XL-Sum dataset and align identical articles written in different languages via crosslingual retrieval using a language-agnostic representation model. ### Supported Tasks and Leaderboards More information needed ### Languages * 'amharic' * 'arabic' * 'azerbaijani' * 'bengali' * 'burmese' * 'chinese\_simplified' * 'chinese\_traditional' * 'english' * 'french' * 'gujarati' * 'hausa' * 'hindi' * 'igbo' * 'indonesian' * 'japanese' * 'kirundi' * 'korean' * 'kyrgyz' * 'marathi' * 'nepali' * 'oromo' * 'pashto' * 'persian' * 'pidgin' * 'portuguese' * 'punjabi' * 'russian' * 'scottish\_gaelic' * 'serbian\_cyrillic' * 'serbian\_latin' * 'sinhala' * 'somali' * 'spanish' * 'swahili' * 'tamil' * 'telugu' * 'thai' * 'tigrinya' * 'turkish' * 'ukrainian' * 'urdu' * 'uzbek' * 'vietnamese' * 'welsh' * 'yoruba' Loading the dataset ------------------- Dataset Structure ----------------- ### Data Instances One example from the 'English' dataset is given below in JSON format. ### Data Fields * 'source\_url': A string representing the source article URL. * 'target\_url': A string representing the target article URL. * 'summary': A string containing the article summary. * 'text' : A string containing the article text. ### Data Splits No. of total examples for each language pair are as follows: Dataset Creation ---------------- ### Curation Rationale More information needed ### Source Data BBC News #### Initial Data Collection and Normalization Detailed in the paper #### Who are the source language producers? Detailed in the paper ### Annotations Detailed in the paper #### Annotation process Detailed in the paper #### Who are the annotators? Detailed in the paper ### Personal and Sensitive Information More information needed Considerations for Using the Data --------------------------------- ### Social Impact of Dataset More information needed ### Discussion of Biases More information needed ### Other Known Limitations More information needed Additional Information ---------------------- ### Dataset Curators More information needed ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders. 
If you use any of the datasets, models or code modules, please cite the following paper: ### Contributions Thanks to @abhik1505040 and @Tahmid for adding this dataset.
[ "### Dataset Summary\n\n\nWe present CrossSum, a large-scale dataset\ncomprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs\nconstituting 45 languages. We use the multilingual XL-Sum dataset and align identical\narticles written in different languages via crosslingual retrieval using a language-agnostic\nrepresentation model.", "### Supported Tasks and Leaderboards\n\n\nMore information needed", "### Languages\n\n\n* 'amharic'\n* 'arabic'\n* 'azerbaijani'\n* 'bengali'\n* 'burmese'\n* 'chinese\\_simplified'\n* 'chinese\\_traditional'\n* 'english'\n* 'french'\n* 'gujarati'\n* 'hausa'\n* 'hindi'\n* 'igbo'\n* 'indonesian'\n* 'japanese'\n* 'kirundi'\n* 'korean'\n* 'kyrgyz'\n* 'marathi'\n* 'nepali'\n* 'oromo'\n* 'pashto'\n* 'persian'\n* 'pidgin'\n* 'portuguese'\n* 'punjabi'\n* 'russian'\n* 'scottish\\_gaelic'\n* 'serbian\\_cyrillic'\n* 'serbian\\_latin'\n* 'sinhala'\n* 'somali'\n* 'spanish'\n* 'swahili'\n* 'tamil'\n* 'telugu'\n* 'thai'\n* 'tigrinya'\n* 'turkish'\n* 'ukrainian'\n* 'urdu'\n* 'uzbek'\n* 'vietnamese'\n* 'welsh'\n* 'yoruba'\n\n\nLoading the dataset\n-------------------\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nOne example from the 'English' dataset is given below in JSON format.", "### Data Fields\n\n\n* 'source\\_url': A string representing the source article URL.\n* 'target\\_url': A string representing the target article URL.\n* 'summary': A string containing the article summary.\n* 'text' : A string containing the article text.", "### Data Splits\n\n\nNo. of total examples for each language pair are as follows:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nMore information needed", "### Source Data\n\n\nBBC News", "#### Initial Data Collection and Normalization\n\n\nDetailed in the paper", "#### Who are the source language producers?\n\n\nDetailed in the paper", "### Annotations\n\n\nDetailed in the paper", "#### Annotation process\n\n\nDetailed in the paper", "#### Who are the annotators?\n\n\nDetailed in the paper", "### Personal and Sensitive Information\n\n\nMore information needed\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nMore information needed", "### Discussion of Biases\n\n\nMore information needed", "### Other Known Limitations\n\n\nMore information needed\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nMore information needed", "### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:", "### Contributions\n\n\nThanks to @abhik1505040 and @Tahmid for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Amharic #language-Arabic #language-Azerbaijani #language-Bengali #language-Burmese #language-Chinese #language-English #language-French #language-Gujarati #language-Hausa #language-Hindi #language-Igbo #language-Indonesian #language-Japanese #language-Rundi #language-Korean #language-Kirghiz #language-Marathi #language-Nepali (macrolanguage) #language-Oromo #language-Pushto #language-Persian #language-Nigerian Pidgin #language-Portuguese #language-Panjabi #language-Russian #language-Scottish Gaelic #language-Serbian #language-Sinhala #language-Somali #language-Spanish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tigrinya #language-Turkish #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Welsh #language-Yoruba #license-cc-by-nc-sa-4.0 #arxiv-2112.08804 #region-us \n", "### Dataset Summary\n\n\nWe present CrossSum, a large-scale dataset\ncomprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs\nconstituting 45 languages. We use the multilingual XL-Sum dataset and align identical\narticles written in different languages via crosslingual retrieval using a language-agnostic\nrepresentation model.", "### Supported Tasks and Leaderboards\n\n\nMore information needed", "### Languages\n\n\n* 'amharic'\n* 'arabic'\n* 'azerbaijani'\n* 'bengali'\n* 'burmese'\n* 'chinese\\_simplified'\n* 'chinese\\_traditional'\n* 'english'\n* 'french'\n* 'gujarati'\n* 'hausa'\n* 'hindi'\n* 'igbo'\n* 'indonesian'\n* 'japanese'\n* 'kirundi'\n* 'korean'\n* 'kyrgyz'\n* 'marathi'\n* 'nepali'\n* 'oromo'\n* 'pashto'\n* 'persian'\n* 'pidgin'\n* 'portuguese'\n* 'punjabi'\n* 'russian'\n* 'scottish\\_gaelic'\n* 'serbian\\_cyrillic'\n* 'serbian\\_latin'\n* 'sinhala'\n* 'somali'\n* 'spanish'\n* 'swahili'\n* 'tamil'\n* 'telugu'\n* 'thai'\n* 'tigrinya'\n* 'turkish'\n* 'ukrainian'\n* 'urdu'\n* 'uzbek'\n* 'vietnamese'\n* 'welsh'\n* 'yoruba'\n\n\nLoading the dataset\n-------------------\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nOne example from the 'English' dataset is given below in JSON format.", "### Data Fields\n\n\n* 'source\\_url': A string representing the source article URL.\n* 'target\\_url': A string representing the target article URL.\n* 'summary': A string containing the article summary.\n* 'text' : A string containing the article text.", "### Data Splits\n\n\nNo. 
of total examples for each language pair are as follows:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nMore information needed", "### Source Data\n\n\nBBC News", "#### Initial Data Collection and Normalization\n\n\nDetailed in the paper", "#### Who are the source language producers?\n\n\nDetailed in the paper", "### Annotations\n\n\nDetailed in the paper", "#### Annotation process\n\n\nDetailed in the paper", "#### Who are the annotators?\n\n\nDetailed in the paper", "### Personal and Sensitive Information\n\n\nMore information needed\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nMore information needed", "### Discussion of Biases\n\n\nMore information needed", "### Other Known Limitations\n\n\nMore information needed\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nMore information needed", "### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:", "### Contributions\n\n\nThanks to @abhik1505040 and @Tahmid for adding this dataset." ]
1f93eb3df343353f9b0d0f9bc724ab9473643bfe
### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate. The evaluation objective is a text classification task: given a climate-related claim and a piece of evidence, predict whether the claim is related to the evidence.
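A minimal loading sketch with the Hugging Face `datasets` library. The `mwong/climatetext-claim-related-evaluation` identifier comes from this record, while the split name is an assumption to adjust to the published files:

```python
from datasets import load_dataset

# The split name "test" is an assumption; adjust to the actual split.
ds = load_dataset("mwong/climatetext-claim-related-evaluation", split="test")

# Inspect a few claim/evidence pairs and their relatedness labels.
for example in ds.select(range(3)):
    print(example)
```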
mwong/climatetext-claim-related-evaluation
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-20T11:00:50+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]}
2022-10-25T09:08:44+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate. The evaluation objective is a text classification task - given a climate related claim and evidence, predict if claim is related to evidence.
[ "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a climate related claim and evidence, predict if claim is related to evidence." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a climate related claim and evidence, predict if claim is related to evidence." ]
72cac22487c265b0b27b424f561f0f3659c5746d
### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate. The evaluation objective is a text classification task: given a climate-related claim and a piece of evidence, predict whether the evidence is related to the claim.
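As with the claim-direction dataset above, a short sketch for checking the label balance of this evidence-direction variant. The identifier comes from this record; the split name and the `label` column name are assumptions:

```python
from collections import Counter
from datasets import load_dataset

# Split name "test" and column name "label" are assumptions.
ds = load_dataset("mwong/climatetext-evidence-related-evaluation", split="test")
print(Counter(ds["label"]))
```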
mwong/climatetext-evidence-related-evaluation
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-20T11:18:14+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]}
2022-10-25T09:08:46+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate. The evaluation objective is a text classification task - given a climate related claim and evidence, predict if evidence is related to claim.
[ "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a climate related claim and evidence, predict if evidence is related to claim." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a climate related claim and evidence, predict if evidence is related to claim." ]
61c95318fd71c55b6ba355d76253254615f387ec
# Dataset Card for WANLI ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [WANLI homepage](https://wanli.allenai.org/) - **Repository:** [Github repo](https://github.com/alisawuffles/wanli) - **Paper:** [arXiv](https://arxiv.org/abs/2201.05955) - **Point of Contact:** [Alisa Liu](mailto:[email protected]) ### Dataset Summary WANLI (**W**orker-**A**I Collaboration for **NLI**) is a collection of 108K English sentence pairs for the task of natural language inference (NLI). Each example is created by first identifying a "pocket" of examples in [MultiNLI (Williams et al., 2018)](https://cims.nyu.edu/~sbowman/multinli/) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern. The set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators. WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI. ### Supported Tasks and Leaderboards The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%. Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision. ### Languages The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States. ## Dataset Structure ### Data Instances Here is an example of an NLI example in `data/wanli/train.jsonl` or `data/wanli/test.jsonl`. 
``` { "id": 225295, "premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.", "hypothesis": "The coach is a good coach.", "gold": "entailment", "genre": "generated", "pairID": "171408" } ``` - `id`: unique identifier for the example - `premise`: a piece of text - `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise - `gold`: one of `entailment`, `neutral`, and `contradiction` - `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators - `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl` We also release the raw annotations for each worker, which can be found in `data/wanli/anonymized_annotations.jsonl`. ``` "WorkerId": "EUJ", "id": 271560, "nearest_neighbors": [ 309783, 202988, 145310, 98030, 148759 ], "premise": "I don't know what I'd do without my cat. He is my only friend.", "hypothesis": "I would be alone.", "label": "neutral", "revised_premise": "I don't know what I'd do without my cat. He is my only friend.", "revised_hypothesis": "I would be alone without my cat.", "gold": "entailment", "revised": true ``` - `WorkerId`: a unique identification for each crowdworker (NOT the real worker ID from AMT) - `id`: id of generated example - `nearest_neighbors`: ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is seed ambiguous MNLI example. MNLI ids correspond to those in `mnli/train.jsonl`. - `premise`: GPT-3 generated premise - `hypothesis`: GPT-3 generated hypothesis - `label`: the shared label of the in-context examples, which is the "intended" label for this generation - `revised_premise`: premise after human review - `revised_hypothesis`: hypothesis after human review - `gold`: annotator-assigned gold label for the (potentially revised) example - `revised`: whether the example was revised ### Data Splits The dataset is randomly split into a *train* and *test* set. | | train | test | |-------------------------|------:|-----:| | Examples | 102885| 5000| ## Dataset Creation ### Curation Rationale A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label. ### Source Data #### Initial Data Collection and Normalization Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from [Swayamdipta et al. (2020)](https://aclanthology.org/2020.emnlp-main.746/) to automatically identify pockets of examples that demonstrate challenging reasoning patterns rela081 tive to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality. 
#### Who are the source language producers? The GPT-3 Curie model generated examples which were then revised and labeled by crowdworkers on Amazon Mechanical Turk. Workers were paid $0.12 for each example that they annotated. At the end of data collection, we aggregated the earnings and time spent of each crowdworker, and found that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target. ### Annotations #### Annotation process Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they are asked to discard it. Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper. Crowdworkers annotated a total of 118,724 examples, with two distinct workers reviewing each example. For examples that both annotators labeled without revision, annotators achieved a Cohen's kappa score of 0.60, indicating substantial agreement. #### Who are the annotators? Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States. 300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62. ### Personal and Sensitive Information The dataset does not contain any personal information about the authors or the crowdworkers. ## Considerations for Using the Data ### Social Impact of Dataset This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide a more challenging evaluation of existing systems. ### Discussion of Biases Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language. To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive. Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset. ## Additional Information ### Dataset Curators WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/). ### Citation Information ``` @misc{liu-etal-2022-wanli, title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation", author = "Liu, Alisa and Swayamdipta, Swabha and Smith, Noah A. and Choi, Yejin", month = jan, year = "2022", url = "https://arxiv.org/pdf/2201.05955", } ```
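For reference, a minimal loading sketch with the Hugging Face `datasets` library; the `alisawuffles/WANLI` identifier, the `train`/`test` split names, and the field names all come from this card:

```python
from datasets import load_dataset

# WANLI ships a train split (102,885 examples) and a test split (5,000).
wanli = load_dataset("alisawuffles/WANLI")

ex = wanli["train"][0]
print(ex["premise"])
print(ex["hypothesis"], "->", ex["gold"])  # entailment / neutral / contradiction
```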
alisawuffles/WANLI
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2201.05955", "region:us" ]
2022-04-20T23:57:25+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "WANLI"}
2022-11-21T17:31:56+00:00
[ "2201.05955" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2201.05955 #region-us
Dataset Card for WANLI
======================

Table of Contents
-----------------

* Table of Contents
* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
* Additional Information
	+ Dataset Curators
	+ Citation Information

Dataset Description
-------------------

* Homepage: WANLI homepage
* Repository: Github repo
* Paper: arXiv
* Point of Contact: Alisa Liu

### Dataset Summary

WANLI (Worker-AI Collaboration for NLI) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in MultiNLI (Williams et al., 2018) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators.

WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.

### Supported Tasks and Leaderboards

The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%.

Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.

### Languages

The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.

Dataset Structure
-----------------

### Data Instances

Here is an example of an NLI example in 'data/wanli/URL' or 'data/wanli/URL'.

* 'id': unique identifier for the example
* 'premise': a piece of text
* 'hypothesis': a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
* 'gold': one of 'entailment', 'neutral', and 'contradiction'
* 'genre': one of 'generated' and 'generated\_revised', depending on whether the example was revised by annotators
* 'pairID': id of seed MNLI example, corresponding to those in 'data/mnli/URL'

We also release the raw annotations for each worker, which can be found in 'data/wanli/anonymized\_annotations.jsonl'.

* 'WorkerId': a unique identification for each crowdworker (NOT the real worker ID from AMT)
* 'id': id of generated example
* 'nearest\_neighbors': ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is the seed ambiguous MNLI example. MNLI ids correspond to those in 'mnli/URL'.
* 'premise': GPT-3 generated premise
* 'hypothesis': GPT-3 generated hypothesis
* 'label': the shared label of the in-context examples, which is the "intended" label for this generation
* 'revised\_premise': premise after human review
* 'revised\_hypothesis': hypothesis after human review
* 'gold': annotator-assigned gold label for the (potentially revised) example
* 'revised': whether the example was revised

### Data Splits

The dataset is randomly split into a *train* and *test* set.

Dataset Creation
----------------

### Curation Rationale

A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.

### Source Data

#### Initial Data Collection and Normalization

Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from Swayamdipta et al. (2020) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.

#### Who are the source language producers?

The GPT-3 Curie model generated examples, which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.
Workers were paid $0.12 for each example that they annotated. At the end of data collection, we aggregate the earnings and time spent for each crowdworker, and find that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.

### Annotations

#### Annotation process

Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.
Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.

Crowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.
For examples that both annotators labeled without revision, annotators achieved a Cohen Kappa score of 0.60, indicating substantial agreement.

#### Who are the annotators?

Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.

300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.
### Personal and Sensitive Information The dataset does not contain any personal information about the authors or the crowdworkers. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems. ### Discussion of Biases Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language. To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive. Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset. Additional Information ---------------------- ### Dataset Curators WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the University of Washington and AI2.
[ "### Dataset Summary\n\n\nWANLI (Worker-AI Collaboration for NLI) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).\nEach example is created by first identifying a \"pocket\" of examples in MultiNLI (Williams et al., 2018) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.\nThe set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators.\n\n\nWANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.", "### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%.\n\n\nModels trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.", "### Languages\n\n\nThe dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example of an NLI example in 'data/wanli/URL' or 'data/wanli/URL'.\n\n\n* 'id': unique identifier for the example\n* 'premise': a piece of text\n* 'hypothesis': a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise\n* 'gold': one of 'entailment', 'neutral', and 'contradiction'\n* 'genre': one of 'generated' and 'generated\\_revised', depending on whether the example was revised by annotators\n* 'pairID': id of seed MNLI example, corresponding to those in 'data/mnli/URL'\n\n\nWe also release the raw annotations for each worker, which can be found in 'data/wanli/anonymized\\_annotations.jsonl'.\n\n\n* 'WorkerId': a unique identification for each crowdworker (NOT the real worker ID from AMT)\n* 'id': id of generated example\n* 'nearest\\_neighbors': ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is seed ambiguous MNLI example. MNLI ids correspond to those in 'mnli/URL'.\n* 'premise': GPT-3 generated premise\n* 'hypothesis': GPT-3 generated hypothesis\n* 'label': the shared label of the in-context examples, which is the \"intended\" label for this generation\n* 'revised\\_premise': premise after human review\n* 'revised\\_hypothesis': hypothesis after human review\n* 'gold': annotator-assigned gold label for the (potentially revised) example\n* 'revised': whether the example was revised", "### Data Splits\n\n\nThe dataset is randomly split into a *train* and *test* set.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nA recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. 
We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nOur pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from Swayamdipta et al. (2020) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.", "#### Who are the source language producers?\n\n\nThe GPT-3 Curie model generated examples, which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.\nWorkers were paid $0.12 for each example that they annotated. At the end of data collection, we aggregate the earnings and time spent for each crowdworker, and find that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.", "### Annotations", "#### Annotation process\n\n\nGiven an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.\nDetails about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.\n\n\nCrowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.\nFor examples that both annotators labeled without revision, annotators achieved a Cohen Kappa score of 0.60, indicating substantial agreement.", "#### Who are the annotators?\n\n\nAnnotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.\n\n\n300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). 
The number of workers who contributed to the final dataset is 62.", "### Personal and Sensitive Information\n\n\nThe dataset does not contain any personal information about the authors or the crowdworkers.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThis dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems.", "### Discussion of Biases\n\n\nText generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language.\nTo partially remedy this, we ask annotators to discard any examples that may be perceived as offensive.\nNonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nWANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the University of Washington and AI2." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2201.05955 #region-us \n", "### Dataset Summary\n\n\nWANLI (Worker-AI Collaboration for NLI) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).\nEach example is created by first identifying a \"pocket\" of examples in MultiNLI (Williams et al., 2018) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.\nThe set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators.\n\n\nWANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.", "### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%.\n\n\nModels trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.", "### Languages\n\n\nThe dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example of an NLI example in 'data/wanli/URL' or 'data/wanli/URL'.\n\n\n* 'id': unique identifier for the example\n* 'premise': a piece of text\n* 'hypothesis': a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise\n* 'gold': one of 'entailment', 'neutral', and 'contradiction'\n* 'genre': one of 'generated' and 'generated\\_revised', depending on whether the example was revised by annotators\n* 'pairID': id of seed MNLI example, corresponding to those in 'data/mnli/URL'\n\n\nWe also release the raw annotations for each worker, which can be found in 'data/wanli/anonymized\\_annotations.jsonl'.\n\n\n* 'WorkerId': a unique identification for each crowdworker (NOT the real worker ID from AMT)\n* 'id': id of generated example\n* 'nearest\\_neighbors': ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is seed ambiguous MNLI example. 
MNLI ids correspond to those in 'mnli/URL'.\n* 'premise': GPT-3 generated premise\n* 'hypothesis': GPT-3 generated hypothesis\n* 'label': the shared label of the in-context examples, which is the \"intended\" label for this generation\n* 'revised\_premise': premise after human review\n* 'revised\_hypothesis': hypothesis after human review\n* 'gold': annotator-assigned gold label for the (potentially revised) example\n* 'revised': whether the example was revised", "### Data Splits\n\n\nThe dataset is randomly split into a *train* and *test* set.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nA recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nOur pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from Swayamdipta et al. (2020) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.", "#### Who are the source language producers?\n\n\nThe GPT-3 Curie model generated examples, which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.\nWorkers were paid $0.12 for each example that they annotated. At the end of data collection, we aggregate the earnings and time spent for each crowdworker, and find that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.", "### Annotations", "#### Annotation process\n\n\nGiven an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.\nDetails about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.\n\n\nCrowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.\nFor examples that both annotators labeled without revision, annotators achieved a Cohen Kappa score of 0.60, indicating substantial agreement.", "#### Who are the annotators?\n\n\nAnnotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.\n\n\n300 Turkers took our qualification test, of which 69 passed. 
Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.", "### Personal and Sensitive Information\n\n\nThe dataset does not contain any personal information about the authors or the crowdworkers.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThis dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems.", "### Discussion of Biases\n\n\nText generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language.\nTo partially remedy this, we ask annotators to discard any examples that may be perceived as offensive.\nNonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nWANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the University of Washington and AI2." ]
7a13ba87386bd8c9083ff858944a5f516e43f939
# Dataset Card for Corpus of Diverse Styles

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)

## Disclaimer

I am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience.

## Dataset Description

- **Homepage:** http://style.cs.umass.edu/
- **Repository:** https://github.com/martiansideofthemoon/style-transfer-paraphrase
- **Paper:** https://arxiv.org/abs/2010.05700

### Dataset Summary

A new benchmark dataset that contains 15M sentences from 11 diverse styles.

To create CDS, we obtain data from existing academic research datasets and public APIs or online collections like Project Gutenberg. We choose styles that are easy for human readers to identify at a sentence level (e.g., Tweets or Biblical text). While prior benchmarks involve a transfer between two styles, CDS has 110 potential transfer directions.

### Citation Information

```
@inproceedings{style20,
  author    = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
  booktitle = {Empirical Methods in Natural Language Processing},
  year      = {2020},
  title     = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
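For reference, a minimal loading sketch; the repository id is the one this card lives under, while split and column names are assumptions:

```python
from datasets import load_dataset

# Split and column names are not confirmed by the card; inspect first.
cds = load_dataset("billray110/corpus-of-diverse-styles")
print(cds)  # lists the available splits and their columns
```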
billray110/corpus-of-diverse-styles
[ "task_categories:text-classification", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "arxiv:2010.05700", "region:us" ]
2022-04-21T00:13:59+00:00
{"annotations_creators": [], "language_creators": ["found"], "language": [], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Corpus of Diverse Styles"}
2022-10-21T23:52:53+00:00
[ "2010.05700" ]
[]
TAGS #task_categories-text-classification #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #arxiv-2010.05700 #region-us
# Dataset Card for Corpus of Diverse Styles ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary ## Disclaimer I am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience. ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL ### Dataset Summary A new benchmark dataset that contains 15M sentences from 11 diverse styles. To create CDS, we obtain data from existing academic research datasets and public APIs or online collections like Project Gutenberg. We choose styles that are easy for human readers to identify at a sentence level (e.g., Tweets or Biblical text). While prior benchmarks involve a transfer between two styles, CDS has 110 potential transfer directions.
[ "# Dataset Card for Corpus of Diverse Styles", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary", "## Disclaimer\nI am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience.", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\nA new benchmark dataset that contains 15M\nsentences from 11 diverse styles.\n\nTo create CDS, we obtain data from existing academic\nresearch datasets and public APIs or online collections\nlike Project Gutenberg. We choose\nstyles that are easy for human readers to identify at\na sentence level (e.g., Tweets or Biblical text). While\nprior benchmarks involve a transfer between two\nstyles, CDS has 110 potential transfer directions." ]
[ "TAGS\n#task_categories-text-classification #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #arxiv-2010.05700 #region-us \n", "# Dataset Card for Corpus of Diverse Styles", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary", "## Disclaimer\nI am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience.", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\nA new benchmark dataset that contains 15M\nsentences from 11 diverse styles.\n\nTo create CDS, we obtain data from existing academic\nresearch datasets and public APIs or online collections\nlike Project Gutenberg. We choose\nstyles that are easy for human readers to identify at\na sentence level (e.g., Tweets or Biblical text). While\nprior benchmarks involve a transfer between two\nstyles, CDS has 110 potential transfer directions." ]
eb798af6f91a5305eb0f18aeb15378cc3c91b421
The dataset is a mix of topics from 3 forums: "Hotline", "Kids Psychology and Development", and "Everything Else". It contains the topic name (Topic), the start post (message), and a unique post id (Message_Id).
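A minimal access sketch follows; the column names come from the description above, while the repository id is the one this card lives under and the split name is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("Kateryna/eva_ru_forum_headlines", split="train")
row = ds[0]
print(row["Topic"], row["Message_Id"])  # headline and post id
print(row["message"][:200])             # start of the opening post
```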
Kateryna/eva_ru_forum_headlines
[ "region:us" ]
2022-04-21T01:05:25+00:00
{}
2022-04-21T01:17:55+00:00
[]
[]
TAGS #region-us
The dataset is a mix of topics from 3 forums: "Hotline", "Kids Psychology and Development", and "Everything Else". It contains the topic name (Topic), the start post (message), and a unique post id (Message_Id).
[]
[ "TAGS\n#region-us \n" ]
ef485238c1494962da9f8896bfacbcf3a0747c73
## Dataset overview
This is a dataset that contains restaurant reviews gathered in 2019 using a webscraping tool in Python. Reviews on restaurant visits and restaurant features were collected for Dutch restaurants.
The dataset is formatted using the 🤗[DatasetDict](https://huggingface.co/docs/datasets/index) format and contains the following indices:
- train, 116693 records
- test, 14587 records
- validation, 14587 records

The dataset holds information at both the restaurant level and the review level and contains the following features:
- [restaurant_ID] > unique restaurant ID
- [restaurant_review_ID] > unique review ID
- [michelin_label] > indicator whether this restaurant was awarded one (or more) Michelin stars prior to 2020
- [score_total] > restaurant level total score
- [score_food] > restaurant level food score
- [score_service] > restaurant level service score
- [score_decor] > restaurant level decor score
- [fame_reviewer] > label for how often a reviewer has posted a restaurant review
- [reviewscore_food] > review level food score
- [reviewscore_service] > review level service score
- [reviewscore_ambiance] > review level ambiance score
- [reviewscore_waiting] > review level waiting score
- [reviewscore_value] > review level value for money score
- [reviewscore_noise] > review level noise score
- [review_text] > the full review that was written by the reviewer for this restaurant
- [review_length] > total length of the review (tokens)

## Purpose
The restaurant reviews submitted by visitors can be used to model the restaurant scores (food, ambiance, etc.) or to model Michelin star holders. In [this blog series](https://medium.com/broadhorizon-cmotions/natural-language-processing-for-predictive-purposes-with-r-cb65f009c12b) we used the review texts to predict the next Michelin star restaurants, using R.
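A minimal loading sketch for the DatasetDict described above; the repository id is assumed to be the one this card lives under, and the 0/1 encoding of michelin_label is an assumption:

```python
from datasets import load_dataset

reviews = load_dataset("cmotions/NL_restaurant_reviews")
print({split: reviews[split].num_rows for split in reviews})
# Select reviews of Michelin-starred restaurants, assuming a 0/1 label.
michelin = reviews["train"].filter(lambda r: r["michelin_label"] == 1)
```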
cmotions/NL_restaurant_reviews
[ "language:nl", "text-classification", "sentiment-analysis", "region:us" ]
2022-04-21T08:48:54+00:00
{"language": ["nl"], "tags": ["text-classification", "sentiment-analysis"], "datasets": ["train", "test", "validation"]}
2022-04-21T10:20:02+00:00
[]
[ "nl" ]
TAGS #language-Dutch #text-classification #sentiment-analysis #region-us
## Dataset overview This is a dataset that contains restaurant reviews gathered in 2019 using a webscraping tool in Python. Reviews on restaurant visits and restaurant features were collected for Dutch restaurants. The dataset is formatted using the DatasetDict format and contains the following indices: - train, 116693 records - test, 14587 records - validation, 14587 records The dataset holds information at both the restaurant level and the review level and contains the following features: - [restaurant_ID] > unique restaurant ID - [restaurant_review_ID] > unique review ID - [michelin_label] > indicator whether this restaurant was awarded one (or more) Michelin stars prior to 2020 - [score_total] > restaurant level total score - [score_food] > restaurant level food score - [score_service] > restaurant level service score - [score_decor] > restaurant level decor score - [fame_reviewer] > label for how often a reviewer has posted a restaurant review - [reviewscore_food] > review level food score - [reviewscore_service] > review level service score - [reviewscore_ambiance] > review level ambiance score - [reviewscore_waiting] > review level waiting score - [reviewscore_value] > review level value for money score - [reviewscore_noise] > review level noise score - [review_text] > the full review that was written by the reviewer for this restaurant - [review_length] > total length of the review (tokens) ## Purpose The restaurant reviews submitted by visitors can be used to model the restaurant scores (food, ambiance, etc.) or to model Michelin star holders. In this blog series we used the review texts to predict the next Michelin star restaurants, using R.
[ "## Dataset overview\nThis is a dataset that contains restaurant reviews gathered in 2019 using a webscraping tool in Python. Reviews on restaurant visits and restaurant features were collected for Dutch restaurants. \nThe dataset is formatted using the DatasetDict format and contains the following indices:\n- train, 116693 records\n- test, 14587 records\n- validation, 14587 records\n\nThe dataset holds both information of the restaurant level as well as the review level and contains the following features:\n- [restaurant_ID] > unique restaurant ID\n- [restaurant_review_ID] > unique review ID\n- [michelin_label] > indicator whether this restaurant was awarded one (or more) Michelin stars prior to 2020\n- [score_total] > restaurant level total score\n- [score_food] > restaurant level food score\n- [score_service] > restaurant level service score\n- [score_decor] > restaurant level decor score\n- [fame_reviewer] > label for how often a reviewer has posted a restaurant review\n- [reviewscore_food] > review level food score\n- [reviewscore_service] > review level service score\n- [reviewscore_ambiance] > review level ambiance score\n- [reviewscore_waiting] > review level waiting score\n- [reviewscore_value] > review level value for money score\n- [reviewscore_noise] > review level noise score\n- [review_text] > the full review that was written by the reviewer for this restaurant\n- [review_length] > total length of the review (tokens)", "## Purpose\nThe restaurant reviews submitted by visitor can be used to model the restaurant scores (food, ambiance etc) or used to model Michelin star holders. In this blog series we used the review texts to predict next Michelin star restaurants, using R." ]
[ "TAGS\n#language-Dutch #text-classification #sentiment-analysis #region-us \n", "## Dataset overview\nThis is a dataset that contains restaurant reviews gathered in 2019 using a webscraping tool in Python. Reviews on restaurant visits and restaurant features were collected for Dutch restaurants. \nThe dataset is formatted using the DatasetDict format and contains the following indices:\n- train, 116693 records\n- test, 14587 records\n- validation, 14587 records\n\nThe dataset holds both information of the restaurant level as well as the review level and contains the following features:\n- [restaurant_ID] > unique restaurant ID\n- [restaurant_review_ID] > unique review ID\n- [michelin_label] > indicator whether this restaurant was awarded one (or more) Michelin stars prior to 2020\n- [score_total] > restaurant level total score\n- [score_food] > restaurant level food score\n- [score_service] > restaurant level service score\n- [score_decor] > restaurant level decor score\n- [fame_reviewer] > label for how often a reviewer has posted a restaurant review\n- [reviewscore_food] > review level food score\n- [reviewscore_service] > review level service score\n- [reviewscore_ambiance] > review level ambiance score\n- [reviewscore_waiting] > review level waiting score\n- [reviewscore_value] > review level value for money score\n- [reviewscore_noise] > review level noise score\n- [review_text] > the full review that was written by the reviewer for this restaurant\n- [review_length] > total length of the review (tokens)", "## Purpose\nThe restaurant reviews submitted by visitor can be used to model the restaurant scores (food, ambiance etc) or used to model Michelin star holders. In this blog series we used the review texts to predict next Michelin star restaurants, using R." ]
ec205ab74f5244e1cf50c06c200832cd50493546
# Dataset Card for [FrozenLake-v1] with slippery = False
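For context, the non-slippery variant named here corresponds to Gym's `is_slippery` flag; only environment creation is shown because reset/step return signatures vary across Gym versions:

```python
import gym

# Deterministic transitions: the agent never slides off its chosen action.
env = gym.make("FrozenLake-v1", is_slippery=False)
```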
AntoineLB/FrozenLakeNotFrozen
[ "region:us" ]
2022-04-21T08:53:07+00:00
{}
2022-04-26T06:40:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for [FrozenLake-v1] with slippery = False
[ "# Dataset Card for [FrozenLake-v1] with slippery = False" ]
[ "TAGS\n#region-us \n", "# Dataset Card for [FrozenLake-v1] with slippery = False" ]
d96c3ca050b694c3150bb53e6c6431f2144ce15a
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task - given a claim and climate-related evidence, predict if the claim is related to the evidence.
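A minimal evaluation-setup sketch; the repository id matches this card, while the split and column names (claim, evidence, label) are assumptions about the pre-processed files:

```python
from datasets import load_dataset

ds = load_dataset("mwong/climatetext-climate_evidence-claim-related-evaluation")
print(ds)  # inspect splits and columns before wiring up a classifier

def is_related(claim: str, evidence: str) -> int:
    # Placeholder baseline using token overlap; a real system would
    # score the pair with a trained classifier.
    return int(any(tok in evidence.lower() for tok in claim.lower().split()))
```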
mwong/climatetext-climate_evidence-claim-related-evaluation
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-21T08:55:30+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]}
2022-10-25T09:08:48+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from the Climate Text dataset (URL), pre-processed, and ready to evaluate. The evaluation objective is a text classification task - given a claim and climate-related evidence, predict if the claim is related to the evidence.
[ "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a claim and climate related evidence, predict if claim is related to evidence." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a claim and climate related evidence, predict if claim is related to evidence." ]
54b4fc98b56081e4ed5bfe6f76f68c8f52d4fc98
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task - given a claim and climate-related evidence, predict if the evidence is related to the claim.
mwong/climatetext-claim-climate_evidence-related-evaluation
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-21T09:07:08+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]}
2022-10-25T09:08:50+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from the Climate Text dataset (URL), pre-processed, and ready to evaluate. The evaluation objective is a text classification task - given a claim and climate-related evidence, predict if the evidence is related to the claim.
[ "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a claim and climate related evidence, predict if evidence is related to claim." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a claim and climate related evidence, predict if evidence is related to claim." ]
4f0fab91e806940ab0e95f573193eb79f5052c70
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task - given climate-related evidence and a claim, predict if the pair is related.
mwong/climatetext-evidence-claim-pair-related-evaluation
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-21T09:16:15+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]}
2022-10-25T09:08:53+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from the Climate Text dataset (URL), pre-processed, and ready to evaluate. The evaluation objective is a text classification task - given climate-related evidence and a claim, predict if the pair is related.
[ "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a climate related evidence and claim, predict if pair is related." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a climate related evidence and claim, predict if pair is related." ]
0961ace6703a76cb598eb4fcdb7f92227aa3c4b3
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task - given a claim and climate-related evidence, predict if the pair is related.
mwong/climatetext-claim-evidence-pair-related-evaluation
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:us" ]
2022-04-21T09:26:24+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]}
2022-10-25T09:08:55+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us
### Dataset Summary This dataset is extracted from the Climate Text dataset (URL), pre-processed, and ready to evaluate. The evaluation objective is a text classification task - given a claim and climate-related evidence, predict if the pair is related.
[ "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a claim and climate related evidence, predict if pair is related." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|climate_text #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #region-us \n", "### Dataset Summary\nThis dataset is extracted from Climate Text dataset (URL pre-processed and, ready to evaluate.\nThe evaluation objective is a text classification task - given a claim and climate related evidence, predict if pair is related." ]
74ddfcfd50ea96a8ebc1456bf5d8e63eb840a084
# Fashion-Mnist-C (Corrupted Fashion-Mnist)

A corrupted Fashion-MNIST benchmark for testing out-of-distribution robustness of computer vision models that were trained on Fashion-Mnist.

[Fashion-Mnist](https://github.com/zalandoresearch/fashion-mnist) is a drop-in replacement for MNIST and Fashion-Mnist-C is a corresponding drop-in replacement for [MNIST-C](https://arxiv.org/abs/1906.02337).

## Corruptions

The following corruptions are applied to the images, equivalently to MNIST-C:

- **Noise** (shot noise and impulse noise)
- **Blur** (glass and motion blur)
- **Transformations** (shear, scale, rotate, brightness, contrast, saturate, inverse)

In addition, we apply various **image flippings and turnings**: For fashion images, flipping the image does not change its label and still keeps it a valid image. However, we noticed that in the nominal fmnist dataset, most images are identically oriented (e.g., most shoes point to the left side). Thus, flipped images provide valid OOD inputs.

Most corruptions are applied at a randomly selected level of *severity*, such that some corrupted images are really hard to classify whereas for others the corruption, while present, is subtle.

## Examples

| Turned | Blurred | Rotated | Noise | Noise | Turned |
| ------------- | ------------- | --------| --------- | -------- | --------- |
| <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_0.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_1.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_6.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_3.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_4.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_5.png" width="100" height="100"> |

## Citation

If you use this dataset, please cite the following paper:

```
@inproceedings{Weiss2022SimpleTechniques,
  title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning},
  author={Weiss, Michael and Tonella, Paolo},
  booktitle={Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis},
  year={2022}
}
```

Also, you may want to cite FMNIST and MNIST-C.

## Credits

- Fashion-Mnist-C is inspired by Google's MNIST-C and our repository is essentially a clone of theirs. See their [paper](https://arxiv.org/abs/1906.02337) and [repo](https://github.com/google-research/mnist-c).
- Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset [here](https://github.com/zalandoresearch/fashion-mnist).
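To illustrate the label-preserving flip/turn corruptions described above, here is a small self-contained sketch; it is not the repository's actual corruption code:

```python
import numpy as np

def flip_or_turn(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one randomly chosen label-preserving flip or turn."""
    op = int(rng.integers(4))
    if op == 0:
        return np.fliplr(image)   # mirror left-right
    if op == 1:
        return np.flipud(image)   # mirror top-bottom
    return np.rot90(image, k=op)  # 180 or 270 degree turn

rng = np.random.default_rng(0)
corrupted = flip_or_turn(np.zeros((28, 28), dtype=np.uint8), rng)
```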
mweiss/fashion_mnist_corrupted
[ "task_categories:image-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|fashion_mnist", "language:en", "license:mit", "arxiv:1906.02337", "region:us" ]
2022-04-21T10:34:02+00:00
{"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|fashion_mnist"], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "fashion-mnist-corrupted"}
2023-03-19T11:45:31+00:00
[ "1906.02337" ]
[ "en" ]
TAGS #task_categories-image-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|fashion_mnist #language-English #license-mit #arxiv-1906.02337 #region-us
Fashion-Mnist-C (Corrupted Fashion-Mnist) ========================================= A corrupted Fashion-MNIST benchmark for testing out-of-distribution robustness of computer vision models that were trained on Fashion-Mnist. Fashion-Mnist is a drop-in replacement for MNIST and Fashion-Mnist-C is a corresponding drop-in replacement for MNIST-C. Corruptions ----------- The following corruptions are applied to the images, equivalently to MNIST-C: * Noise (shot noise and impulse noise) * Blur (glass and motion blur) * Transformations (shear, scale, rotate, brightness, contrast, saturate, inverse) In addition, we apply various image flippings and turnings: For fashion images, flipping the image does not change its label and still keeps it a valid image. However, we noticed that in the nominal fmnist dataset, most images are identically oriented (e.g., most shoes point to the left side). Thus, flipped images provide valid OOD inputs. Most corruptions are applied at a randomly selected level of *severity*, such that some corrupted images are really hard to classify whereas for others the corruption, while present, is subtle. Examples -------- If you use this dataset, please cite the following paper: Also, you may want to cite FMNIST and MNIST-C. Credits ------- * Fashion-Mnist-C is inspired by Google's MNIST-C and our repository is essentially a clone of theirs. See their paper and repo. * Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset here.
[]
[ "TAGS\n#task_categories-image-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|fashion_mnist #language-English #license-mit #arxiv-1906.02337 #region-us \n" ]
65bc9e7e7353fff750326c9523e384701934e530
# Dataset Card for Visual Genome

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://homes.cs.washington.edu/~ranjay/visualgenome/
- **Repository:**
- **Paper:** https://doi.org/10.1007/s11263-016-0981-7
- **Leaderboard:**
- **Point of Contact:** ranjaykrishna [at] gmail [dot] com

### Dataset Summary

Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language.

From the paper:
> Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that “the person is riding a horse-drawn carriage.”

Visual Genome has:
- 108,077 images
- 5.4 Million Region Descriptions
- 1.7 Million Visual Question Answers
- 3.8 Million Object Instances
- 2.8 Million Attributes
- 2.3 Million Relationships

From the paper:
> Our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets.

### Dataset Preprocessing

### Supported Tasks and Leaderboards

### Languages

All annotations use English as the primary language.

## Dataset Structure

### Data Instances

When loading a specific configuration, users have to append a version-dependent suffix:

```python
from datasets import load_dataset
load_dataset("visual_genome", "region_description_v1.2.0")
```

#### region_descriptions

An example looks as follows.
```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "regions": [
    {
      "region_id": 1382,
      "image_id": 1,
      "phrase": "the clock is green in colour",
      "x": 421,
      "y": 57,
      "width": 82,
      "height": 139
    },
    ...
  ]
}
```

#### objects

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "objects": [
    {
      "object_id": 1058498,
      "x": 421,
      "y": 91,
      "w": 79,
      "h": 339,
      "names": [ "clock" ],
      "synsets": [ "clock.n.01" ]
    },
    ...
  ]
}
```

#### attributes

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "attributes": [
    {
      "object_id": 1058498,
      "x": 421,
      "y": 91,
      "w": 79,
      "h": 339,
      "names": [ "clock" ],
      "synsets": [ "clock.n.01" ],
      "attributes": [ "green", "tall" ]
    },
    ...
  ]
}
```

#### relationships

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "relationships": [
    {
      "relationship_id": 15927,
      "predicate": "ON",
      "synsets": "['along.r.01']",
      "subject": {
        "object_id": 5045,
        "x": 119,
        "y": 338,
        "w": 274,
        "h": 192,
        "names": [ "shade" ],
        "synsets": [ "shade.n.01" ]
      },
      "object": {
        "object_id": 5046,
        "x": 77,
        "y": 328,
        "w": 714,
        "h": 262,
        "names": [ "street" ],
        "synsets": [ "street.n.01" ]
      }
    },
    ...
  ]
}
```

#### question_answers

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "qas": [
    {
      "qa_id": 986768,
      "image_id": 1,
      "question": "What color is the clock?",
      "answer": "Green.",
      "a_objects": [],
      "q_objects": []
    },
    ...
  ]
}
```

### Data Fields

When loading a specific configuration, users have to append a version-dependent suffix:

```python
from datasets import load_dataset
load_dataset("visual_genome", "region_description_v1.2.0")
```

#### region_descriptions

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `image_id`: Unique numeric ID of the image.
- `url`: URL of source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: Id mapping to MSCOCO indexing.
- `flickr_id`: Id mapping to Flicker indexing.
- `regions`: Holds a list of `Region` dataclasses:
  - `region_id`: Unique numeric ID of the region.
  - `image_id`: Unique numeric ID of the image.
  - `x`: x coordinate of bounding box's top left corner.
  - `y`: y coordinate of bounding box's top left corner.
  - `width`: Bounding box width.
  - `height`: Bounding box height.

#### objects

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `objects`: Holds a list of `Object` dataclasses:
  - `object_id`: Unique numeric ID of the object.
  - `x`: x coordinate of the bounding box's top left corner.
  - `y`: y coordinate of the bounding box's top left corner.
  - `w`: Bounding box width.
  - `h`: Bounding box height.
  - `names`: List of names associated with the object. This field can hold multiple values when several names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
  - `synsets`: List of `WordNet synsets`.

#### attributes

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `attributes`: Holds a list of `Object` dataclasses:
  - `object_id`: Unique numeric ID of the object.
  - `x`: x coordinate of the bounding box's top left corner.
  - `y`: y coordinate of the bounding box's top left corner.
  - `w`: Bounding box width.
  - `h`: Bounding box height.
  - `names`: List of names associated with the object. This field can hold multiple values when several names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
  - `synsets`: List of `WordNet synsets`.
  - `attributes`: List of attributes associated with the object.

#### relationships

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `relationships`: Holds a list of `Relationship` dataclasses:
  - `relationship_id`: Unique numeric ID of the relationship.
  - `predicate`: Predicate defining the relationship between a subject and an object.
  - `synsets`: List of `WordNet synsets`.
  - `subject`: Object dataclass. See the subsection on `objects`.
  - `object`: Object dataclass. See the subsection on `objects`.
#### question_answers

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `qas`: Holds a list of `Question-Answering` dataclasses:
  - `qa_id`: Unique numeric ID of the question-answer pair.
  - `image_id`: Unique numeric ID of the image.
  - `question`: Question.
  - `answer`: Answer.
  - `q_objects`: List of Object dataclasses associated with the `question` field. See the subsection on `objects`.
  - `a_objects`: List of Object dataclasses associated with the `answer` field. See the subsection on `objects`.

### Data Splits

All the data is contained in the training set.

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

From the paper:
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs. Each HIT was designed such that workers manage to earn anywhere between $6-$8 per hour if they work continuously, in line with ethical research standards on Mechanical Turk (Salehi et al., 2015). Visual Genome HITs achieved a 94.1% retention rate, meaning that 94.1% of workers who completed one of our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States. The majority of our workers were between the ages of 25 and 34 years old. Our youngest contributor was 18 years and the oldest was 68 years old. We also had a near-balanced split of 54.15% male and 45.85% female workers.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

Visual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License.

### Citation Information

```bibtex
@article{Krishna2016VisualGC,
  title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
  author={Ranjay Krishna and Yuke Zhu and Oliver Groth and Justin Johnson and Kenji Hata and Joshua Kravitz and Stephanie Chen and Yannis Kalantidis and Li-Jia Li and David A. Shamma and Michael S. Bernstein and Li Fei-Fei},
  journal={International Journal of Computer Vision},
  year={2017},
  volume={123},
  pages={32-73},
  url={https://doi.org/10.1007/s11263-016-0981-7},
  doi={10.1007/s11263-016-0981-7}
}
```

### Contributions

Due to limitations of the dummy_data creation, we provide a `fix_generated_dummy_data.py` script that fixes the dataset in-place.

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
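As referenced in the Data Instances section, here is a minimal end-to-end usage sketch. It assumes the `datasets` library is installed; the configuration name is reused from the loading examples above, and the field names follow the schema documented in this card.

```python
from datasets import load_dataset

# Configuration names carry a version-dependent suffix, as noted above.
dataset = load_dataset("visual_genome", "region_description_v1.2.0", split="train")

# Query the sample index before the "image" column so that only this
# sample's image file is decoded.
sample = dataset[0]
print(sample["image_id"], sample["image"].size)

# Region annotations are lists of dicts with the fields documented above.
for region in sample["regions"][:3]:
    print(region["phrase"], (region["x"], region["y"], region["width"], region["height"]))
```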
visual_genome
[ "task_categories:image-to-text", "task_categories:object-detection", "task_categories:visual-question-answering", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-21T12:09:21+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["image-to-text", "object-detection", "visual-question-answering"], "task_ids": ["image-captioning"], "paperswithcode_id": "visual-genome", "pretty_name": "VisualGenome", "config_names": ["objects", "question_answers", "region_descriptions"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int32"}, {"name": "url", "dtype": "string"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "coco_id", "dtype": "int64"}, {"name": "flickr_id", "dtype": "int64"}, {"name": "regions", "list": [{"name": "region_id", "dtype": "int32"}, {"name": "image_id", "dtype": "int32"}, {"name": "phrase", "dtype": "string"}, {"name": "x", "dtype": "int32"}, {"name": "y", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}]}], "config_name": "region_descriptions_v1.0.0", "splits": [{"name": "train", "num_bytes": 260873884, "num_examples": 108077}], "download_size": 15304605295, "dataset_size": 260873884}}
2023-06-29T14:23:59+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-text #task_categories-object-detection #task_categories-visual-question-answering #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for Visual Genome ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Preprocessing - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Leaderboard: - Point of Contact: ranjaykrishna [at] gmail [dot] com ### Dataset Summary Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language. From the paper: > Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that “the person is riding a horse-drawn carriage.” Visual Genome has: - 108,077 image - 5.4 Million Region Descriptions - 1.7 Million Visual Question Answers - 3.8 Million Object Instances - 2.8 Million Attributes - 2.3 Million Relationships From the paper: > Our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. ### Dataset Preprocessing ### Supported Tasks and Leaderboards ### Languages All of annotations use English as primary language. ## Dataset Structure ### Data Instances When loading a specific configuration, users has to append a version dependent suffix: #### region_descriptions An example of looks as follows. #### objects An example of looks as follows. #### attributes An example of looks as follows. #### relationships An example of looks as follows. #### question_answers An example of looks as follows. ### Data Fields When loading a specific configuration, users has to append a version dependent suffix: #### region_descriptions - 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' - 'image_id': Unique numeric ID of the image. - 'url': URL of source image. - 'width': Image width. - 'height': Image height. - 'coco_id': Id mapping to MSCOCO indexing. - 'flickr_id': Id mapping to Flicker indexing. - 'regions': Holds a list of 'Region' dataclasses: - 'region_id': Unique numeric ID of the region. 
- 'image_id': Unique numeric ID of the image. - 'x': x coordinate of bounding box's top left corner. - 'y': y coordinate of bounding box's top left corner. - 'width': Bounding box width. - 'height': Bounding box height. #### objects - 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' - 'image_id': Unique numeric ID of the image. - 'url': URL of source image. - 'width': Image width. - 'height': Image height. - 'coco_id': Id mapping to MSCOCO indexing. - 'flickr_id': Id mapping to Flicker indexing. - 'objects': Holds a list of 'Object' dataclasses: - 'object_id': Unique numeric ID of the object. - 'x': x coordinate of bounding box's top left corner. - 'y': y coordinate of bounding box's top left corner. - 'w': Bounding box width. - 'h': Bounding box height. - 'names': List of names associated with the object. This field can hold multiple values in the sense the multiple names are considered as acceptable. For example: ['monitor', 'computer'] at URL - 'synsets': List of 'WordNet synsets'. #### attributes - 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' - 'image_id': Unique numeric ID of the image. - 'url': URL of source image. - 'width': Image width. - 'height': Image height. - 'coco_id': Id mapping to MSCOCO indexing. - 'flickr_id': Id mapping to Flicker indexing. - 'attributes': Holds a list of 'Object' dataclasses: - 'object_id': Unique numeric ID of the region. - 'x': x coordinate of bounding box's top left corner. - 'y': y coordinate of bounding box's top left corner. - 'w': Bounding box width. - 'h': Bounding box height. - 'names': List of names associated with the object. This field can hold multiple values in the sense the multiple names are considered as acceptable. For example: ['monitor', 'computer'] at URL - 'synsets': List of 'WordNet synsets'. - 'attributes': List of attributes associated with the object. #### relationships - 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' - 'image_id': Unique numeric ID of the image. - 'url': URL of source image. - 'width': Image width. - 'height': Image height. - 'coco_id': Id mapping to MSCOCO indexing. - 'flickr_id': Id mapping to Flicker indexing. - 'relationships': Holds a list of 'Relationship' dataclasses: - 'relationship_id': Unique numeric ID of the object. - 'predicate': Predicate defining relationship between a subject and an object. - 'synsets': List of 'WordNet synsets'. - 'subject': Object dataclass. See subsection on 'objects'. - 'object': Object dataclass. 
See subsection on 'objects'. #### question_answers - 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' - 'image_id': Unique numeric ID of the image. - 'url': URL of source image. - 'width': Image width. - 'height': Image height. - 'coco_id': Id mapping to MSCOCO indexing. - 'flickr_id': Id mapping to Flicker indexing. - 'qas': Holds a list of 'Question-Answering' dataclasses: - 'qa_id': Unique numeric ID of the question-answer pair. - 'image_id': Unique numeric ID of the image. - 'question': Question. - 'answer': Answer. - 'q_objects': List of object dataclass associated with 'question' field. See subsection on 'objects'. - 'a_objects': List of object dataclass associated with 'answer' field. See subsection on 'objects'. ### Data Splits All the data is contained in training set. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? From the paper: > We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33, 000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs. Each HIT was designed such that workers manage to earn anywhere between $6-$8 per hour if they work continuously, in line with ethical research standards on Mechanical Turk (Salehi et al., 2015). Visual Genome HITs achieved a 94.1% retention rate, meaning that 94.1% of workers who completed one of our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States. The majority of our workers were between the ages of 25 and 34 years old. Our youngest contributor was 18 years and the oldest was 68 years old. We also had a near-balanced split of 54.15% male and 45.85% female workers. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Visual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License. ### Contributions Due to limitation of the dummy_data creation, we provide a 'fix_generated_dummy_data.py' script that fix the dataset in-place. Thanks to @thomasw21 for adding this dataset.
[ "# Dataset Card for Visual Genome", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard:\n- Point of Contact: ranjaykrishna [at] gmail [dot] com", "### Dataset Summary\n\nVisual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language.\n\nFrom the paper:\n> Despite progress in perceptual tasks such as\nimage classification, computers still perform poorly on\ncognitive tasks such as image description and question\nanswering. Cognition is core to tasks that involve not\njust recognizing, but reasoning about our visual world.\nHowever, models used to tackle the rich content in images for cognitive tasks are still being trained using the\nsame datasets designed for perceptual tasks. To achieve\nsuccess at cognitive tasks, models need to understand\nthe interactions and relationships between objects in an\nimage. When asked “What vehicle is the person riding?”,\ncomputers will need to identify the objects in an image\nas well as the relationships riding(man, carriage) and\npulling(horse, carriage) to answer correctly that “the\nperson is riding a horse-drawn carriage.”\n\nVisual Genome has:\n - 108,077 image\n - 5.4 Million Region Descriptions\n - 1.7 Million Visual Question Answers\n - 3.8 Million Object Instances\n - 2.8 Million Attributes\n - 2.3 Million Relationships\n\nFrom the paper:\n> Our dataset contains over 108K images where each\nimage has an average of 35 objects, 26 attributes, and 21\npairwise relationships between objects. We canonicalize\nthe objects, attributes, relationships, and noun phrases\nin region descriptions and questions answer pairs to\nWordNet synsets.", "### Dataset Preprocessing", "### Supported Tasks and Leaderboards", "### Languages\n\nAll of annotations use English as primary language.", "## Dataset Structure", "### Data Instances\n\nWhen loading a specific configuration, users has to append a version dependent suffix:", "#### region_descriptions\n\nAn example of looks as follows.", "#### objects\n\nAn example of looks as follows.", "#### attributes\n\nAn example of looks as follows.", "#### relationships\n\nAn example of looks as follows.", "#### question_answers\n\nAn example of looks as follows.", "### Data Fields\n\nWhen loading a specific configuration, users has to append a version dependent suffix:", "#### region_descriptions\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'regions': Holds a list of 'Region' dataclasses:\n - 'region_id': Unique numeric ID of the region.\n - 'image_id': Unique numeric ID of the image.\n - 'x': x coordinate of bounding box's top left corner.\n - 'y': y coordinate of bounding box's top left corner.\n - 'width': Bounding box width.\n - 'height': Bounding box height.", "#### objects\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'objects': Holds a list of 'Object' dataclasses:\n - 'object_id': Unique numeric ID of the object.\n - 'x': x coordinate of bounding box's top left corner.\n - 'y': y coordinate of bounding box's top left corner.\n - 'w': Bounding box width.\n - 'h': Bounding box height.\n - 'names': List of names associated with the object. This field can hold multiple values in the sense the multiple names are considered as acceptable. For example: ['monitor', 'computer'] at URL\n - 'synsets': List of 'WordNet synsets'.", "#### attributes\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'attributes': Holds a list of 'Object' dataclasses:\n - 'object_id': Unique numeric ID of the region.\n - 'x': x coordinate of bounding box's top left corner.\n - 'y': y coordinate of bounding box's top left corner.\n - 'w': Bounding box width.\n - 'h': Bounding box height.\n - 'names': List of names associated with the object. This field can hold multiple values in the sense the multiple names are considered as acceptable. For example: ['monitor', 'computer'] at URL\n - 'synsets': List of 'WordNet synsets'.\n - 'attributes': List of attributes associated with the object.", "#### relationships\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'relationships': Holds a list of 'Relationship' dataclasses:\n - 'relationship_id': Unique numeric ID of the object.\n - 'predicate': Predicate defining relationship between a subject and an object.\n - 'synsets': List of 'WordNet synsets'.\n - 'subject': Object dataclass. See subsection on 'objects'.\n - 'object': Object dataclass. See subsection on 'objects'.", "#### question_answers\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'qas': Holds a list of 'Question-Answering' dataclasses:\n - 'qa_id': Unique numeric ID of the question-answer pair.\n - 'image_id': Unique numeric ID of the image.\n - 'question': Question.\n - 'answer': Answer.\n - 'q_objects': List of object dataclass associated with 'question' field. See subsection on 'objects'.\n - 'a_objects': List of object dataclass associated with 'answer' field. See subsection on 'objects'.", "### Data Splits\n\nAll the data is contained in training set.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nFrom the paper:\n> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over\n33, 000 unique workers contributed to the dataset. The\ndataset was collected over the course of 6 months after\n15 months of experimentation and iteration on the data\nrepresentation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where\neach HIT involved creating descriptions, questions and\nanswers, or region graphs. Each HIT was designed such\nthat workers manage to earn anywhere between $6-$8\nper hour if they work continuously, in line with ethical\nresearch standards on Mechanical Turk (Salehi et al.,\n2015). Visual Genome HITs achieved a 94.1% retention\nrate, meaning that 94.1% of workers who completed one\nof our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States.\nThe majority of our workers were\nbetween the ages of 25 and 34 years old. Our youngest\ncontributor was 18 years and the oldest was 68 years\nold. 
We also had a near-balanced split of 54.15% male\nand 45.85% female workers.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nVisual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\nDue to limitation of the dummy_data creation, we provide a 'fix_generated_dummy_data.py' script that fix the dataset in-place.\n\nThanks to @thomasw21 for adding this dataset." ]
[ "TAGS\n#task_categories-image-to-text #task_categories-object-detection #task_categories-visual-question-answering #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for Visual Genome", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Preprocessing\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard:\n- Point of Contact: ranjaykrishna [at] gmail [dot] com", "### Dataset Summary\n\nVisual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language.\n\nFrom the paper:\n> Despite progress in perceptual tasks such as\nimage classification, computers still perform poorly on\ncognitive tasks such as image description and question\nanswering. Cognition is core to tasks that involve not\njust recognizing, but reasoning about our visual world.\nHowever, models used to tackle the rich content in images for cognitive tasks are still being trained using the\nsame datasets designed for perceptual tasks. To achieve\nsuccess at cognitive tasks, models need to understand\nthe interactions and relationships between objects in an\nimage. When asked “What vehicle is the person riding?”,\ncomputers will need to identify the objects in an image\nas well as the relationships riding(man, carriage) and\npulling(horse, carriage) to answer correctly that “the\nperson is riding a horse-drawn carriage.”\n\nVisual Genome has:\n - 108,077 image\n - 5.4 Million Region Descriptions\n - 1.7 Million Visual Question Answers\n - 3.8 Million Object Instances\n - 2.8 Million Attributes\n - 2.3 Million Relationships\n\nFrom the paper:\n> Our dataset contains over 108K images where each\nimage has an average of 35 objects, 26 attributes, and 21\npairwise relationships between objects. We canonicalize\nthe objects, attributes, relationships, and noun phrases\nin region descriptions and questions answer pairs to\nWordNet synsets.", "### Dataset Preprocessing", "### Supported Tasks and Leaderboards", "### Languages\n\nAll of annotations use English as primary language.", "## Dataset Structure", "### Data Instances\n\nWhen loading a specific configuration, users has to append a version dependent suffix:", "#### region_descriptions\n\nAn example of looks as follows.", "#### objects\n\nAn example of looks as follows.", "#### attributes\n\nAn example of looks as follows.", "#### relationships\n\nAn example of looks as follows.", "#### question_answers\n\nAn example of looks as follows.", "### Data Fields\n\nWhen loading a specific configuration, users has to append a version dependent suffix:", "#### region_descriptions\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. 
Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'regions': Holds a list of 'Region' dataclasses:\n - 'region_id': Unique numeric ID of the region.\n - 'image_id': Unique numeric ID of the image.\n - 'x': x coordinate of bounding box's top left corner.\n - 'y': y coordinate of bounding box's top left corner.\n - 'width': Bounding box width.\n - 'height': Bounding box height.", "#### objects\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'objects': Holds a list of 'Object' dataclasses:\n - 'object_id': Unique numeric ID of the object.\n - 'x': x coordinate of bounding box's top left corner.\n - 'y': y coordinate of bounding box's top left corner.\n - 'w': Bounding box width.\n - 'h': Bounding box height.\n - 'names': List of names associated with the object. This field can hold multiple values in the sense the multiple names are considered as acceptable. For example: ['monitor', 'computer'] at URL\n - 'synsets': List of 'WordNet synsets'.", "#### attributes\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'attributes': Holds a list of 'Object' dataclasses:\n - 'object_id': Unique numeric ID of the region.\n - 'x': x coordinate of bounding box's top left corner.\n - 'y': y coordinate of bounding box's top left corner.\n - 'w': Bounding box width.\n - 'h': Bounding box height.\n - 'names': List of names associated with the object. This field can hold multiple values in the sense the multiple names are considered as acceptable. For example: ['monitor', 'computer'] at URL\n - 'synsets': List of 'WordNet synsets'.\n - 'attributes': List of attributes associated with the object.", "#### relationships\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'relationships': Holds a list of 'Relationship' dataclasses:\n - 'relationship_id': Unique numeric ID of the object.\n - 'predicate': Predicate defining relationship between a subject and an object.\n - 'synsets': List of 'WordNet synsets'.\n - 'subject': Object dataclass. See subsection on 'objects'.\n - 'object': Object dataclass. See subsection on 'objects'.", "#### question_answers\n\n- 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'image_id': Unique numeric ID of the image.\n- 'url': URL of source image.\n- 'width': Image width.\n- 'height': Image height.\n- 'coco_id': Id mapping to MSCOCO indexing.\n- 'flickr_id': Id mapping to Flicker indexing.\n- 'qas': Holds a list of 'Question-Answering' dataclasses:\n - 'qa_id': Unique numeric ID of the question-answer pair.\n - 'image_id': Unique numeric ID of the image.\n - 'question': Question.\n - 'answer': Answer.\n - 'q_objects': List of object dataclass associated with 'question' field. See subsection on 'objects'.\n - 'a_objects': List of object dataclass associated with 'answer' field. See subsection on 'objects'.", "### Data Splits\n\nAll the data is contained in training set.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nFrom the paper:\n> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over\n33, 000 unique workers contributed to the dataset. The\ndataset was collected over the course of 6 months after\n15 months of experimentation and iteration on the data\nrepresentation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where\neach HIT involved creating descriptions, questions and\nanswers, or region graphs. Each HIT was designed such\nthat workers manage to earn anywhere between $6-$8\nper hour if they work continuously, in line with ethical\nresearch standards on Mechanical Turk (Salehi et al.,\n2015). Visual Genome HITs achieved a 94.1% retention\nrate, meaning that 94.1% of workers who completed one\nof our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States.\nThe majority of our workers were\nbetween the ages of 25 and 34 years old. Our youngest\ncontributor was 18 years and the oldest was 68 years\nold. 
We also had a near-balanced split of 54.15% male\nand 45.85% female workers.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nVisual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\nDue to limitation of the dummy_data creation, we provide a 'fix_generated_dummy_data.py' script that fix the dataset in-place.\n\nThanks to @thomasw21 for adding this dataset." ]
b31afad97a9fada96522cc2f5b080338d4a3f7cd
# Named Entity Recognition for COVID-19 Bio Entities

The dataset was taken from https://github.com/davidcampos/covid19-corpus

## Dataset

The dataset was then split into several datasets, each representing one entity: Disorder, Species, Chemical or Drug, Gene and Protein, Enzyme, Anatomy, Biological Process, Molecular Function, Cellular Component, Pathway, and microRNA. Moreover, another dataset was also created containing all of the aforementioned entities that are non-overlapping in nature.

## Dataset Formats

The datasets are available in two formats: IOB and spaCy's JSONL.

- IOB: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/BIO
- SpaCy JSONL: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/SpaCy
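For illustration, IOB files pair each token with a tag marking it as the beginning of (B-), inside of (I-), or outside (O) an entity span. The fragment below is a hypothetical example of the scheme, not a line from the corpus; the tokens and entity types are invented for illustration.

```
SARS-CoV-2    B-Species
infects       O
human         B-Species
lung          B-Anatomy
cells         I-Anatomy
.             O
```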
tsantosh7/COVID-19_Annotations
[ "license:cc", "region:us" ]
2022-04-21T12:57:27+00:00
{"license": "cc"}
2022-04-21T13:03:06+00:00
[]
[]
TAGS #license-cc #region-us
Named Entity Recognition for COVID-19 Bio Entities The dataset was taken from URL Dataset The dataset was then split into several datasets each one representing one entity. Namely, Disorder, Species, Chemical or Drug, Gene and Protein, Enzyme, Anatomy, Biological Process, Molecular Function, Cellular Component, Pathway and microRNA. Moreover, another dataset is also created with all those aforementioned that are non-overlapping in nature. Dataset Formats The datasets are available in two formats IOB and Spacy's JSONL format. IOB : URL SpaCy JSONL: URL
[]
[ "TAGS\n#license-cc #region-us \n" ]
b3bbb554daa84ecc2b8c5bfd1b861a55fbabf639
# PIE Dataset Card for "conll2003" This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the [CoNLL 2003 Huggingface dataset loading script](https://huggingface.co/datasets/conll2003). ## Data Schema The document type for this dataset is `CoNLL2003Document` which defines the following data fields: - `text` (str) - `id` (str, optional) - `metadata` (dictionary, optional) and the following annotation layers: - `entities` (annotation type: `LabeledSpan`, target: `text`) See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/annotations.py) for the annotation type definitions. ## Document Converters The dataset provides document converters for the following target document types: - `pytorch_ie.documents.TextDocumentWithLabeledSpans` See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type definitions.
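To make the schema concrete, here is a small sketch of how a document with such fields and layers is typically built in pytorch-ie. The construction pattern below is an assumption based on the annotation and document type definitions linked above, not code taken from this wrapper.

```python
from pytorch_ie.annotations import LabeledSpan
from pytorch_ie.documents import TextDocumentWithLabeledSpans

# A text document with the fields listed above: `text` plus an optional `id`.
doc = TextDocumentWithLabeledSpans(text="John lives in Berlin.", id="doc-0")

# `labeled_spans` is the annotation layer targeting `text`; in the
# CoNLL2003Document type the analogous layer is called `entities`.
doc.labeled_spans.append(LabeledSpan(start=0, end=4, label="PER"))
doc.labeled_spans.append(LabeledSpan(start=14, end=20, label="LOC"))

for span in doc.labeled_spans:
    print(doc.text[span.start : span.end], span.label)
```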
pie/conll2003
[ "region:us" ]
2022-04-21T13:15:40+00:00
{}
2024-01-03T13:20:14+00:00
[]
[]
TAGS #region-us
# PIE Dataset Card for "conll2003" This is a PyTorch-IE wrapper for the CoNLL 2003 Huggingface dataset loading script. ## Data Schema The document type for this dataset is 'CoNLL2003Document' which defines the following data fields: - 'text' (str) - 'id' (str, optional) - 'metadata' (dictionary, optional) and the following annotation layers: - 'entities' (annotation type: 'LabeledSpan', target: 'text') See here for the annotation type definitions. ## Document Converters The dataset provides document converters for the following target document types: - 'pytorch_ie.documents.TextDocumentWithLabeledSpans' See here for the document type definitions.
[ "# PIE Dataset Card for \"conll2003\"\n\nThis is a PyTorch-IE wrapper for the\nCoNLL 2003 Huggingface dataset loading script.", "## Data Schema\n\nThe document type for this dataset is 'CoNLL2003Document' which defines the following data fields:\n\n- 'text' (str)\n- 'id' (str, optional)\n- 'metadata' (dictionary, optional)\n\nand the following annotation layers:\n\n- 'entities' (annotation type: 'LabeledSpan', target: 'text')\n\nSee here for the annotation type definitions.", "## Document Converters\n\nThe dataset provides document converters for the following target document types:\n\n- 'pytorch_ie.documents.TextDocumentWithLabeledSpans'\n\nSee here for the document type\ndefinitions." ]
[ "TAGS\n#region-us \n", "# PIE Dataset Card for \"conll2003\"\n\nThis is a PyTorch-IE wrapper for the\nCoNLL 2003 Huggingface dataset loading script.", "## Data Schema\n\nThe document type for this dataset is 'CoNLL2003Document' which defines the following data fields:\n\n- 'text' (str)\n- 'id' (str, optional)\n- 'metadata' (dictionary, optional)\n\nand the following annotation layers:\n\n- 'entities' (annotation type: 'LabeledSpan', target: 'text')\n\nSee here for the annotation type definitions.", "## Document Converters\n\nThe dataset provides document converters for the following target document types:\n\n- 'pytorch_ie.documents.TextDocumentWithLabeledSpans'\n\nSee here for the document type\ndefinitions." ]
f4c8f95b2143cc3d276df440d57f66e9e4ab1346
# Dataset Card for RVL-CDIP

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:[email protected])

### Dataset Summary

The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.

### Supported Tasks and Leaderboards

- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).

### Languages

All the classes and documents use English as their primary language.

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{
  'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
  'label': 15
}
```

### Data Fields

- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.

<details>
  <summary>Class Label Mappings</summary>

```json
{
  "0": "letter",
  "1": "form",
  "2": "email",
  "3": "handwritten",
  "4": "advertisement",
  "5": "scientific report",
  "6": "scientific publication",
  "7": "specification",
  "8": "file folder",
  "9": "news article",
  "10": "budget",
  "11": "invoice",
  "12": "presentation",
  "13": "questionnaire",
  "14": "resume",
  "15": "memo"
}
```

</details>

### Data Splits

|              |train |test |validation|
|--------------|-----:|----:|---------:|
|# of examples |320000|40000|     40000|

The dataset was split in proportions similar to those of ImageNet.
- 320000 images were used for training,
- 40000 images for validation, and
- 40000 images for testing.

## Dataset Creation

### Curation Rationale

From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories, useful for training new CNNs for document analysis.

### Source Data

#### Initial Data Collection and Normalization

The same as in the IIT-CDIP collection.

#### Who are the source language producers?
The same as in the IIT-CDIP collection.

### Annotations

#### Annotation process

The same as in the IIT-CDIP collection.

#### Who are the annotators?

The same as in the IIT-CDIP collection.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.

### Licensing Information

RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).

### Citation Information

```bibtex
@inproceedings{harley2015icdar,
    title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
    author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
    booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
    year = {2015}
}
```

### Contributions

Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset.
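To complement the sample instance shown above, a minimal loading sketch. It assumes the standard `datasets` API; the `int2str` helper on the `label` feature resolves the integer to one of the 16 class names listed in the mapping above.

```python
from datasets import load_dataset

dataset = load_dataset("aharley/rvl_cdip", split="train")

sample = dataset[0]
# `label` is an integer; the ClassLabel feature maps it back to a name,
# e.g. 15 -> "memo" in the mapping above.
print(dataset.features["label"].int2str(sample["label"]))
print(sample["image"].size)
```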
aharley/rvl_cdip
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|iit_cdip", "language:en", "license:other", "arxiv:1502.07058", "region:us" ]
2022-04-21T13:21:01+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|iit_cdip"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "rvl-cdip", "pretty_name": "RVL-CDIP", "viewer": false, "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}], "splits": [{"name": "train", "num_bytes": 38816373360, "num_examples": 320000}, {"name": "test", "num_bytes": 4863300853, "num_examples": 40000}, {"name": "validation", "num_bytes": 4868685208, "num_examples": 40000}], "download_size": 38779484559, "dataset_size": 48548359421}}
2023-05-02T08:06:16+00:00
[ "1502.07058" ]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|iit_cdip #language-English #license-other #arxiv-1502.07058 #region-us
Dataset Card for RVL-CDIP ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: The RVL-CDIP Dataset * Repository: * Paper: Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval * Leaderboard: RVL-CDIP leaderboard * Point of Contact: Adam W. Harley ### Dataset Summary The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. ### Supported Tasks and Leaderboards * 'image-classification': The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available here. ### Languages All the classes and documents use English as their primary language. Dataset Structure ----------------- ### Data Instances A sample from the training set is provided below : ### Data Fields * 'image': A 'PIL.Image.Image' object containing a document. * 'label': an 'int' classification label. Class Label Mappings ### Data Splits The dataset was split in proportions similar to those of ImageNet. * 320000 images were used for training, * 40000 images for validation, and * 40000 images for testing. Dataset Creation ---------------- ### Curation Rationale From the paper: > > This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 > document images across 16 categories, useful for training new CNNs for document analysis. > > > ### Source Data #### Initial Data Collection and Normalization The same as in the IIT-CDIP collection. #### Who are the source language producers? The same as in the IIT-CDIP collection. ### Annotations #### Annotation process The same as in the IIT-CDIP collection. #### Who are the annotators? The same as in the IIT-CDIP collection. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis. ### Licensing Information RVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here. ### Contributions Thanks to @dnaveenr for adding this dataset.
[ "### Dataset Summary\n\n\nThe RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available here.", "### Languages\n\n\nAll the classes and documents use English as their primary language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below :", "### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing a document.\n* 'label': an 'int' classification label.\n\n\n\nClass Label Mappings", "### Data Splits\n\n\n\nThe dataset was split in proportions similar to those of ImageNet.\n\n\n* 320000 images were used for training,\n* 40000 images for validation, and\n* 40000 images for testing.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000\n> document images across 16 categories, useful for training new CNNs for document analysis.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe same as in the IIT-CDIP collection.", "#### Who are the source language producers?\n\n\nThe same as in the IIT-CDIP collection.", "### Annotations", "#### Annotation process\n\n\nThe same as in the IIT-CDIP collection.", "#### Who are the annotators?\n\n\nThe same as in the IIT-CDIP collection.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.", "### Licensing Information\n\n\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.", "### Contributions\n\n\nThanks to @dnaveenr for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|iit_cdip #language-English #license-other #arxiv-1502.07058 #region-us \n", "### Dataset Summary\n\n\nThe RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available here.", "### Languages\n\n\nAll the classes and documents use English as their primary language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below :", "### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing a document.\n* 'label': an 'int' classification label.\n\n\n\nClass Label Mappings", "### Data Splits\n\n\n\nThe dataset was split in proportions similar to those of ImageNet.\n\n\n* 320000 images were used for training,\n* 40000 images for validation, and\n* 40000 images for testing.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000\n> document images across 16 categories, useful for training new CNNs for document analysis.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe same as in the IIT-CDIP collection.", "#### Who are the source language producers?\n\n\nThe same as in the IIT-CDIP collection.", "### Annotations", "#### Annotation process\n\n\nThe same as in the IIT-CDIP collection.", "#### Who are the annotators?\n\n\nThe same as in the IIT-CDIP collection.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.", "### Licensing Information\n\n\nRVL-CDIP is a subset of IIT-CDIP, which came from the Legacy Tobacco Document Library, for which license information can be found here.", "### Contributions\n\n\nThanks to @dnaveenr for adding this dataset." ]
36076b03a64c3dc168fa7222da61de07b6eac67e
# Dataset Card for Goud summarization dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Goud.ma: a News Article Dataset for Summarization in Moroccan Darija](https://openreview.net/forum?id=BMVq5MELb9)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

Goud-sum contains 158k articles and their headlines extracted from the [Goud.ma](https://www.goud.ma/) news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija).

### Supported Tasks and Leaderboards

Text Summarization

### Languages

* Moroccan Arabic (Darija)
* Modern Standard Arabic

## Dataset Structure

### Data Instances

The dataset consists of article-headline pairs in string format.

### Data Fields

* article: a string containing the body of the news article
* headline: a string containing the article's headline
* categories: a list of strings of article categories

### Data Splits

The Goud-sum dataset has 3 splits: _train_, _validation_, and _test_. The number of instances in each split is shown below.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 139,288                      |
| Validation    | 9,497                        |
| Test          | 9,497                        |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

The text was written by journalists at [Goud](https://www.goud.ma/).

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[N/A]

#### Who are the annotators? 
[N/A] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{issam2022goudma, title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija}, author={Abderrahmane Issam and Khalil Mrini}, booktitle={3rd Workshop on African Natural Language Processing}, year={2022}, url={https://openreview.net/forum?id=BMVq5MELb9} } ``` ### Contributions Thanks to [@issam9](https://github.com/issam9) and [@KhalilMrini](https://github.com/KhalilMrini) for adding this dataset.
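As a quick sanity check on the splits and fields described above, here is a minimal loading sketch with the `datasets` library; the hub ID `Goud/Goud-sum` is taken from this repository.

```python
from datasets import load_dataset

goud = load_dataset("Goud/Goud-sum")  # train / validation / test splits

example = goud["train"][0]
print(example["article"][:200])  # body of the news article (Arabic script)
print(example["headline"])       # headline in Moroccan Darija
print(example["categories"])     # list of category strings
```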
Goud/Goud-sum
[ "task_categories:summarization", "task_ids:news-articles-headline-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "size_categories:100K<n<1M", "source_datasets:original", "region:us" ]
2022-04-21T14:25:00+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": [], "multilinguality": [], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-headline-generation"], "pretty_name": "Goud-sum"}
2022-07-04T15:02:36+00:00
[]
[]
TAGS #task_categories-summarization #task_ids-news-articles-headline-generation #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-100K<n<1M #source_datasets-original #region-us
Dataset Card for Goud summarization dataset =========================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: * Paper:URL: a News Article Dataset for Summarization in Moroccan Darija * Leaderboard: * Point of Contact: ### Dataset Summary Goud-sum contains 158k articles and their headlines extracted from URL news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija). ### Supported Tasks and Leaderboards Text Summarization ### Languages * Moroccan Arabic (Darija) * Modern Standard Arabic Dataset Structure ----------------- ### Data Instances The dataset consists of article-headline pairs in string format. ### Data Fields * article: a string containing the body of the news article * headline: a string containing the article's headline * categories: a list of string of article categories ### Data Splits Goud-sum dataset has 3 splits: *train*, *validation*, and *test*. Below are the number of instances in each split. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? The text was written by journalists at Goud. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @issam9 and @KhalilMrini for adding this dataset.
[ "### Dataset Summary\n\n\nGoud-sum contains 158k articles and their headlines extracted from URL news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija).", "### Supported Tasks and Leaderboards\n\n\nText Summarization", "### Languages\n\n\n* Moroccan Arabic (Darija)\n* Modern Standard Arabic\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe dataset consists of article-headline pairs in string format.", "### Data Fields\n\n\n* article: a string containing the body of the news article\n* headline: a string containing the article's headline\n* categories: a list of string of article categories", "### Data Splits\n\n\nGoud-sum dataset has 3 splits: *train*, *validation*, and *test*. Below are the number of instances in each split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe text was written by journalists at Goud.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @issam9 and @KhalilMrini for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-headline-generation #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-100K<n<1M #source_datasets-original #region-us \n", "### Dataset Summary\n\n\nGoud-sum contains 158k articles and their headlines extracted from URL news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija).", "### Supported Tasks and Leaderboards\n\n\nText Summarization", "### Languages\n\n\n* Moroccan Arabic (Darija)\n* Modern Standard Arabic\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe dataset consists of article-headline pairs in string format.", "### Data Fields\n\n\n* article: a string containing the body of the news article\n* headline: a string containing the article's headline\n* categories: a list of string of article categories", "### Data Splits\n\n\nGoud-sum dataset has 3 splits: *train*, *validation*, and *test*. Below are the number of instances in each split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe text was written by journalists at Goud.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @issam9 and @KhalilMrini for adding this dataset." ]
24eab2c29829f2672c4a9516f0d7aa750b88ba61
Dataset for API: https://github.com/eleldar/Translation

Test English-Russian dataset:
```
DatasetDict({
    normal: Dataset({
        features: ['en', 'ru'],
        num_rows: 2009
    })
    short: Dataset({
        features: ['en', 'ru'],
        num_rows: 2664
    })
    train: Dataset({
        features: ['en', 'ru'],
        num_rows: 1660
    })
    validation: Dataset({
        features: ['en', 'ru'],
        num_rows: 208
    })
    test: Dataset({
        features: ['en', 'ru'],
        num_rows: 4170
    })
})
```
The dataset was built from the following tables:
* https://github.com/eleldar/Translator/blob/master/test_dataset/flores101_dataset/101_languages.xlsx?raw=true
* https://github.com/eleldar/Translator/blob/master/test_dataset/normal.xlsx?raw=true
* https://github.com/eleldar/Translator/blob/master/test_dataset/corrected_vocab.xlsx?raw=true
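A minimal sketch of reproducing the `DatasetDict` printed above; the hub ID `eleldar/sub_train-normal_tests-datasets` is taken from this repository.

```python
from datasets import load_dataset

data = load_dataset("eleldar/sub_train-normal_tests-datasets")

# Expect the five splits shown above: normal, short, train, validation, test.
for name, split in data.items():
    print(name, split.num_rows)

pair = data["test"][0]
print(pair["en"], "->", pair["ru"])  # one English-Russian sentence pair
```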
eleldar/sub_train-normal_tests-datasets
[ "region:us" ]
2022-04-21T14:25:32+00:00
{}
2022-06-16T10:19:47+00:00
[]
[]
TAGS #region-us
Dataset for API: URL Test English-Russian dataset: The dataset was built from the following tables: * URL * URL * URL
[]
[ "TAGS\n#region-us \n" ]
1f2761557622d85a47d719882e5e8654f2c4dec1
# GEM Submission Submission name: SeqPlan-SportSett
GEM-submissions/ratishsp__seqplan-sportsett__1650556902
[ "benchmark:gem", "evaluation", "benchmark", "region:us" ]
2022-04-21T15:01:43+00:00
{"benchmark": "gem", "type": "prediction", "submission_name": "SeqPlan-SportSett", "tags": ["evaluation", "benchmark"]}
2022-04-21T15:01:45+00:00
[]
[]
TAGS #benchmark-gem #evaluation #benchmark #region-us
# GEM Submission Submission name: SeqPlan-SportSett
[ "# GEM Submission\n\nSubmission name: SeqPlan-SportSett" ]
[ "TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n", "# GEM Submission\n\nSubmission name: SeqPlan-SportSett" ]
1f2a598128b862851ba63f35a9d7c277c005e2d7
## Overview Original dataset available [here](https://gluebenchmark.com/diagnostics). ## Dataset curation Filled in the empty rows of columns "lexical semantics", "predicate-argument structure", "logic", "knowledge" with empty string `""`. Labels are encoded as follows ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` ## Code to create dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset df = pd.read_csv("<path to file>/diagnostic-full.tsv", sep="\t") # column names to lower df.columns = df.columns.str.lower() # fill na assert df["label"].isna().sum() == 0 df = df.fillna("") # encode labels df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "lexical semantics": Value(dtype="string", id=None), "predicate-argument structure": Value(dtype="string", id=None), "logic": Value(dtype="string", id=None), "knowledge": Value(dtype="string", id=None), "domain": Value(dtype="string", id=None), "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), }) dataset = Dataset.from_pandas(df, features=features) dataset.push_to_hub("glue_diagnostics", token="<token>", split="test") ```
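To complement the creation script, here is a sketch of loading the pushed dataset back and decoding the integer labels; it assumes the `push_to_hub` call above succeeded with `split="test"`.

```python
from datasets import load_dataset

diagnostics = load_dataset("pietrolesci/glue_diagnostics", split="test")

row = diagnostics[0]
print(row["premise"])
print(row["hypothesis"])

# The label column is a ClassLabel; int2str recovers the original string label.
label_feature = diagnostics.features["label"]
print(label_feature.int2str(row["label"]))  # entailment / neutral / contradiction
```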
pietrolesci/glue_diagnostics
[ "region:us" ]
2022-04-21T15:46:38+00:00
{}
2022-04-21T15:51:56+00:00
[]
[]
TAGS #region-us
## Overview Original dataset available here. ## Dataset curation Filled in the empty rows of columns "lexical semantics", "predicate-argument structure", "logic", "knowledge" with empty string '""'. Labels are encoded as follows ## Code to create dataset
[ "## Overview\nOriginal dataset available here.", "## Dataset curation\nFilled in the empty rows of columns \"lexical semantics\", \"predicate-argument structure\", \n\"logic\", \"knowledge\" with empty string '\"\"'.\nLabels are encoded as follows", "## Code to create dataset" ]
[ "TAGS\n#region-us \n", "## Overview\nOriginal dataset available here.", "## Dataset curation\nFilled in the empty rows of columns \"lexical semantics\", \"predicate-argument structure\", \n\"logic\", \"knowledge\" with empty string '\"\"'.\nLabels are encoded as follows", "## Code to create dataset" ]
bb68655c6b6f1431cdf2b90239cbf2fb5e52f3cd
# Dataset Card for librispeech_asr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12) - **Repository:** [Needs More Information] - **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) - **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-other) - **Point of Contact:** [Daniel Povey](mailto:[email protected]) ### Dataset Summary LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. ### Supported Tasks and Leaderboards - `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean and ranks models based on their WER. ### Languages The audio is in English. There are two configurations: `clean` and `other`. The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on a different dataset, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other". ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided. 
``` {'chapter_id': 141231, 'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'id': '1272-141231-0000', 'speaker_id': 1272, 'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'} ``` ### Data Fields - file: A path to the downloaded audio file in .flac format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - text: the transcription of the audio file. - id: unique id of the data sample. - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples. - chapter_id: id of the audiobook chapter which includes the transcription. ### Data Splits The size of the corpus makes it impractical, or at least inconvenient for some users, to distribute it as a single large archive. Thus the training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively. A simple automatic procedure was used to select the audio in the first two sets to be, on average, of higher recording quality and with accents closer to US English. An acoustic model was trained on WSJ’s si-84 data subset and was used to recognize the audio in the corpus, using a bigram LM estimated on the text of the respective books. We computed the Word Error Rate (WER) of this automatic transcript relative to our reference transcripts obtained from the book texts. The speakers in the corpus were ranked according to the WER of the WSJ model’s transcripts, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other". For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360 respectively accounting for 100h and 360h of the training data. For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech. | | Train.500 | Train.360 | Train.100 | Valid | Test | | ----- | ------ | ----- | ---- | ---- | ---- | | clean | - | 104014 | 28539 | 2703 | 2620| | other | 148688 | - | - | 2864 | 2939 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
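Following the card's advice on audio decoding, here is a minimal loading sketch; the configuration and split names are assumed to match the card (`clean`/`other` with `train.100`/`train.360`/`validation`/`test`).

```python
from datasets import load_dataset

ds = load_dataset("patrickvonplaten/librispeech_asr_self_contained", "clean", split="validation")

# Index the sample *before* the "audio" column so only this one file is decoded.
audio = ds[0]["audio"]
print(audio["sampling_rate"])  # 16000
print(audio["array"].shape)    # decoded waveform as a numpy array
print(ds[0]["text"])           # transcription in upper-cased English
```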
patrickvonplaten/librispeech_asr_self_contained
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-21T16:06:19+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "task_ids": ["audio-speaker-identification"], "paperswithcode_id": "librispeech-1", "pretty_name": "LibriSpeech"}
2022-10-24T16:48:37+00:00
[]
[ "en" ]
TAGS #task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
Dataset Card for librispeech\_asr ================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: LibriSpeech ASR corpus * Repository: * Paper: LibriSpeech: An ASR Corpus Based On Public Domain Audio Books * Leaderboard: Paperswithcode Leaderboard * Point of Contact: Daniel Povey ### Dataset Summary LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. ### Supported Tasks and Leaderboards * 'automatic-speech-recognition', 'audio-speaker-identification': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at URL and ranks models based on their WER. ### Languages The audio is in English. There are two configurations: 'clean' and 'other'. The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on a different dataset, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other". Dataset Structure ----------------- ### Data Instances A typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided. ### Data Fields * file: A path to the downloaded audio file in .flac format. * audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * text: the transcription of the audio file. * id: unique id of the data sample. * speaker\_id: unique id of the speaker. The same speaker id can be found for multiple data samples. * chapter\_id: id of the audiobook chapter which includes the transcription. ### Data Splits The size of the corpus makes it impractical, or at least inconvenient for some users, to distribute it as a single large archive. Thus the training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively. A simple automatic procedure was used to select the audio in the first two sets to be, on average, of higher recording quality and with accents closer to US English. 
An acoustic model was trained on WSJ’s si-84 data subset and was used to recognize the audio in the corpus, using a bigram LM estimated on the text of the respective books. We computed the Word Error Rate (WER) of this automatic transcript relative to our reference transcripts obtained from the book texts. The speakers in the corpus were ranked according to the WER of the WSJ model’s transcripts, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other". For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360 respectively accounting for 100h and 360h of the training data. For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. ### Licensing Information CC BY 4.0 ### Contributions Thanks to @patrickvonplaten for adding this dataset.
[ "### Dataset Summary\n\n\nLibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.", "### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition', 'audio-speaker-identification': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at URL and ranks models based on their WER.", "### Languages\n\n\nThe audio is in English. There are two configurations: 'clean' and 'other'.\nThe speakers in the corpus were ranked according to the WER of the transcripts of a model trained on\na different dataset, and were divided roughly in the middle,\nwith the lower-WER speakers designated as \"clean\" and the higher WER speakers designated as \"other\".\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.", "### Data Fields\n\n\n* file: A path to the downloaded audio file in .flac format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* id: unique id of the data sample.\n* speaker\\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.\n* chapter\\_id: id of the audiobook chapter which includes the transcription.", "### Data Splits\n\n\nThe size of the corpus makes it impractical, or at least inconvenient\nfor some users, to distribute it as a single large archive. Thus the\ntraining portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.\nA simple automatic\nprocedure was used to select the audio in the first two sets to be, on\naverage, of higher recording quality and with accents closer to US\nEnglish. An acoustic model was trained on WSJ’s si-84 data subset\nand was used to recognize the audio in the corpus, using a bigram\nLM estimated on the text of the respective books. We computed the\nWord Error Rate (WER) of this automatic transcript relative to our\nreference transcripts obtained from the book texts.\nThe speakers in the corpus were ranked according to the WER of\nthe WSJ model’s transcripts, and were divided roughly in the middle,\nwith the lower-WER speakers designated as \"clean\" and the higher-WER speakers designated as \"other\".\n\n\nFor \"clean\", the data is split into train, validation, and test set. 
The train set is further split into train.100 and train.360\nrespectively accounting for 100h and 360h of the training data.\nFor \"other\", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.", "### Licensing Information\n\n\nCC BY 4.0", "### Contributions\n\n\nThanks to @patrickvonplaten for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nLibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.", "### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition', 'audio-speaker-identification': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at URL and ranks models based on their WER.", "### Languages\n\n\nThe audio is in English. There are two configurations: 'clean' and 'other'.\nThe speakers in the corpus were ranked according to the WER of the transcripts of a model trained on\na different dataset, and were divided roughly in the middle,\nwith the lower-WER speakers designated as \"clean\" and the higher WER speakers designated as \"other\".\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.", "### Data Fields\n\n\n* file: A path to the downloaded audio file in .flac format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* id: unique id of the data sample.\n* speaker\\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.\n* chapter\\_id: id of the audiobook chapter which includes the transcription.", "### Data Splits\n\n\nThe size of the corpus makes it impractical, or at least inconvenient\nfor some users, to distribute it as a single large archive. Thus the\ntraining portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.\nA simple automatic\nprocedure was used to select the audio in the first two sets to be, on\naverage, of higher recording quality and with accents closer to US\nEnglish. An acoustic model was trained on WSJ’s si-84 data subset\nand was used to recognize the audio in the corpus, using a bigram\nLM estimated on the text of the respective books. 
We computed the\nWord Error Rate (WER) of this automatic transcript relative to our\nreference transcripts obtained from the book texts.\nThe speakers in the corpus were ranked according to the WER of\nthe WSJ model’s transcripts, and were divided roughly in the middle,\nwith the lower-WER speakers designated as \"clean\" and the higher-WER speakers designated as \"other\".\n\n\nFor \"clean\", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360\nrespectively accounting for 100h and 360h of the training data.\nFor \"other\", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.", "### Licensing Information\n\n\nCC BY 4.0", "### Contributions\n\n\nThanks to @patrickvonplaten for adding this dataset." ]
996e72dea151ca0856d1d16efd71f560b18da817
# Dataset Card for XLEL-WD-Dictionary ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** <https://github.com/adithya7/xlel-wd> - **Repository:** <https://github.com/adithya7/xlel-wd> - **Paper:** <https://arxiv.org/abs/2204.06535> - **Leaderboard:** N/A - **Point of Contact:** Adithya Pratapa ### Dataset Summary XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles. ### Supported Tasks and Leaderboards This dictionary can be used as a part of the event linking task. ### Languages This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper. | Language | Code | Language | Code | Language | Code | Language | Code | | -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- | | Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg | | Bengali | bn | Catalan | ca | Czech | cs | Danish | da | | German | de | Greek | el | English | en | Spanish | es | | Persian | fa | Finnish | fi | French | fr | Hebrew | he | | Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it | | Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr | | Malay | ms | Dutch | nl | Norwegian | no | Polish | pl | | Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si | | Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv | | Swahili | sw | Tamil | ta | Telugu | te | Thai | th | | Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh | ## Dataset Structure ### Data Instances Each instance in the `label_dict.jsonl` file follows the below template, ```json { "label_id": "830917", "label_title": "2010 European Aquatics Championships", "label_desc": "The 2010 European Aquatics Championships were held from 4–15 August 2010 in Budapest and Balatonfüred, Hungary. It was the fourth time that the city of Budapest hosts this event after 1926, 1958 and 2006. 
Events in swimming, diving, synchronised swimming (synchro) and open water swimming were scheduled.",
  "label_lang": "en"
}
```

### Data Fields

| Field | Meaning |
| ----- | ------- |
| `label_id` | Wikidata ID |
| `label_title` | Title for the event, as collected from the corresponding Wikipedia article |
| `label_desc` | Description for the event, as collected from the corresponding Wikipedia article |
| `label_lang` | language used for the title and description |

### Data Splits

This dictionary has a single split, `dictionary`. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles.

## Dataset Creation

### Curation Rationale

This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it's unclear if the same methodologies can be extended for linking mentions to events from a KB. Event items are collected from Wikidata.

### Source Data

#### Initial Data Collection and Normalization

A Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control.

#### Who are the source language producers?

The titles and descriptions for the events are written by Wikipedia contributors.

### Annotations

#### Annotation process

This dataset was automatically compiled from Wikidata. It was post-processed to improve data quality.

#### Who are the annotators?

Wikidata and Wikipedia contributors.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

This dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), war (Q198), etc.

## Additional Information

### Dataset Curators

The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).

### Licensing Information

XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```bib
@article{pratapa-etal-2022-multilingual,
  title = {Multilingual Event Linking to Wikidata},
  author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2204.06535},
}
```

### Contributions

Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
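Since the dictionary ships as `label_dict.jsonl`, one way to iterate over it is with the standard `json` module; this sketch assumes the file has been downloaded locally under that name.

```python
import json

with open("label_dict.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each record ties one language-specific description to a Wikidata event item.
        print(record["label_id"], record["label_lang"], record["label_title"])
        break  # print only the first record
```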
adithya7/xlel_wd_dictionary
[ "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:af", "language:ar", "language:be", "language:bg", "language:bn", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:fa", "language:fi", "language:fr", "language:he", "language:hi", "language:hu", "language:id", "language:it", "language:ja", "language:ko", "language:ml", "language:mr", "language:ms", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:si", "language:sk", "language:sl", "language:sr", "language:sv", "language:sw", "language:ta", "language:te", "language:th", "language:tr", "language:uk", "language:vi", "language:zh", "license:cc-by-4.0", "arxiv:2204.06535", "region:us" ]
2022-04-22T01:36:27+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["af", "ar", "be", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "ko", "ml", "mr", "ms", "nl", "no", "pl", "pt", "ro", "ru", "si", "sk", "sl", "sr", "sv", "sw", "ta", "te", "th", "tr", "uk", "vi", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles."}
2022-07-01T16:30:21+00:00
[ "2204.06535" ]
[ "af", "ar", "be", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "ko", "ml", "mr", "ms", "nl", "no", "pl", "pt", "ro", "ru", "si", "sk", "sl", "sr", "sv", "sw", "ta", "te", "th", "tr", "uk", "vi", "zh" ]
TAGS #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Arabic #language-Belarusian #language-Bulgarian #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hebrew #language-Hindi #language-Hungarian #language-Indonesian #language-Italian #language-Japanese #language-Korean #language-Malayalam #language-Marathi #language-Malay (macrolanguage) #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Sinhala #language-Slovak #language-Slovenian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Turkish #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-4.0 #arxiv-2204.06535 #region-us
Dataset Card for XLEL-WD-Dictionary =================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: <URL * Repository: <URL * Paper: <URL * Leaderboard: N/A * Point of Contact: Adithya Pratapa ### Dataset Summary XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles. ### Supported Tasks and Leaderboards This dictionary can be used as a part of the event linking task. ### Languages This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper. Dataset Structure ----------------- ### Data Instances Each instance in the 'label\_dict.jsonl' file follows the below template, ### Data Fields ### Data Splits This dictionary has a single split, 'dictionary'. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles. Dataset Creation ---------------- ### Curation Rationale This datasets helps address the task of event linking. KB linking is extensively studied for entities, but its unclear if the same methodologies can be extended for linking mentions to events from KB. Event items are collected from Wikidata. ### Source Data #### Initial Data Collection and Normalization A Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control. #### Who are the source language producers? The titles and descriptions for the events are written by Wikipedia contributors. ### Annotations #### Annotation process This dataset was automatically compiled from Wikidata. It was post-processed to improve data quality. #### Who are the annotators? Wikidata and Wikipedia contributors. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations This dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), war (Q198), etc., Additional Information ---------------------- ### Dataset Curators The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at Github:xlel-wd. ### Licensing Information XLEL-WD dataset is released under CC-BY-4.0 license. ### Contributions Thanks to @adithya7 for adding this dataset.
[ "### Dataset Summary\n\n\nXLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles.", "### Supported Tasks and Leaderboards\n\n\nThis dictionary can be used as a part of the event linking task.", "### Languages\n\n\nThis dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in the 'label\\_dict.jsonl' file follows the below template,", "### Data Fields", "### Data Splits\n\n\nThis dictionary has a single split, 'dictionary'. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis datasets helps address the task of event linking. KB linking is extensively studied for entities, but its unclear if the same methodologies can be extended for linking mentions to events from KB. Event items are collected from Wikidata.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nA Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control.", "#### Who are the source language producers?\n\n\nThe titles and descriptions for the events are written by Wikipedia contributors.", "### Annotations", "#### Annotation process\n\n\nThis dataset was automatically compiled from Wikidata. It was post-processed to improve data quality.", "#### Who are the annotators?\n\n\nWikidata and Wikipedia contributors.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nThis dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), war (Q198), etc.,\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at Github:xlel-wd.", "### Licensing Information\n\n\nXLEL-WD dataset is released under CC-BY-4.0 license.", "### Contributions\n\n\nThanks to @adithya7 for adding this dataset." ]
[ "TAGS\n#annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Arabic #language-Belarusian #language-Bulgarian #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hebrew #language-Hindi #language-Hungarian #language-Indonesian #language-Italian #language-Japanese #language-Korean #language-Malayalam #language-Marathi #language-Malay (macrolanguage) #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Sinhala #language-Slovak #language-Slovenian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Turkish #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-4.0 #arxiv-2204.06535 #region-us \n", "### Dataset Summary\n\n\nXLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles.", "### Supported Tasks and Leaderboards\n\n\nThis dictionary can be used as a part of the event linking task.", "### Languages\n\n\nThis dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in the 'label\\_dict.jsonl' file follows the below template,", "### Data Fields", "### Data Splits\n\n\nThis dictionary has a single split, 'dictionary'. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis datasets helps address the task of event linking. KB linking is extensively studied for entities, but its unclear if the same methodologies can be extended for linking mentions to events from KB. Event items are collected from Wikidata.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nA Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control.", "#### Who are the source language producers?\n\n\nThe titles and descriptions for the events are written by Wikipedia contributors.", "### Annotations", "#### Annotation process\n\n\nThis dataset was automatically compiled from Wikidata. It was post-processed to improve data quality.", "#### Who are the annotators?\n\n\nWikidata and Wikipedia contributors.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nThis dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), war (Q198), etc.,\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. 
The code for collecting the dataset is available at Github:xlel-wd.", "### Licensing Information\n\n\nXLEL-WD dataset is released under CC-BY-4.0 license.", "### Contributions\n\n\nThanks to @adithya7 for adding this dataset." ]
a6d542d37b24cc1f2536af5e4afb850b9641e3ff
# Dataset Card for XLEL-WD

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** <https://github.com/adithya7/xlel-wd>
- **Repository:** <https://github.com/adithya7/xlel-wd>
- **Paper:** <https://arxiv.org/abs/2204.06535>
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa

### Dataset Summary

XLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.

The descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from [adithya7/xlel_wd_dictionary](https://huggingface.co/datasets/adithya7/xlel_wd_dictionary).

### Supported Tasks and Leaderboards

This dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.

- Multilingual linking: mention and the event descriptions are in the same language.
- Crosslingual linking: the event descriptions are only available in English.

### Languages

This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.

| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |

## Dataset Structure

### Data Instances

Each instance in the `train.jsonl`, `dev.jsonl` and `test.jsonl` files follows the template below.
```json
{
    "context_left": "Minibaev's first major international medal came in the men's synchronized 10 metre platform event at the ",
    "mention": "2010 European Championships",
    "context_right": ".",
    "context_lang": "en",
    "label_id": "830917",
}
```

### Data Fields

| Field | Meaning |
| ----- | ------- |
| `mention` | text span of the mention |
| `context_left` | left paragraph context from the document |
| `context_right` | right paragraph context from the document |
| `context_lang` | language of the context (and mention) |
| `context_title` | document title of the mention (only Wikinews subset) |
| `context_date` | document publication date of the mention (only Wikinews subset) |
| `label_id` | Wikidata label ID for the event. E.g. 830917 refers to Q830917 from Wikidata. |

### Data Splits

The Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup.

| | Train | Dev | Test | Total |
| ---- | :-----: | :---: | :----: | :-----: |
| Events | 8653 | 1090 | 1204 | 10947 |
| Event Sequences | 6758 | 844 | 846 | 8448 |
| Mentions | 1.44M | 165K | 190K | 1.8M |
| Languages | 44 | 44 | 44 | 44 |

The Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.

| | (Cross-domain) Test | (Zero-shot) Test |
| --- | :------------------: | :-----: |
| Events | 802 | 149 |
| Mentions | 2562 | 437 |
| Languages | 27 | 21 |

## Dataset Creation

### Curation Rationale

This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it's unclear if the same methodologies can be extended for linking mentions to events from a KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.

### Source Data

#### Initial Data Collection and Normalization

First, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.

#### Who are the source language producers?

The documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.

### Annotations

#### Annotation process

This dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality.

#### Who are the annotators?

The annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added by the original Wiki contributors.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

XLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198).

## Additional Information

### Dataset Curators

The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).

### Licensing Information

XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information ```bib @article{pratapa-etal-2022-multilingual, title = {Multilingual Event Linking to Wikidata}, author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko}, publisher = {arXiv}, year = {2022}, url = {https://arxiv.org/abs/2204.06535}, } ``` ### Contributions Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
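For convenience, here is a minimal loading sketch for the splits and fields described above. It assumes the standard `datasets` API, that the repo's default configuration loads without a config name, and that the dictionary items carry a `label_id` key matching the mention files; only field names listed on this card are otherwise relied upon.

```python
from datasets import load_dataset

# Mention data (this repo) and the event dictionary released separately
# at adithya7/xlel_wd_dictionary. If the repo defines multiple
# configurations, a configuration name may need to be passed as well.
mentions = load_dataset("adithya7/xlel_wd")
dictionary = load_dataset("adithya7/xlel_wd_dictionary", split="dictionary")

# Group dictionary entries by their Wikidata label ID, so that a mention
# can be paired with all multilingual descriptions of its event.
events = {}
for item in dictionary:
    events.setdefault(item["label_id"], []).append(item)

# Reconstruct the paragraph context around one training mention.
ex = mentions["train"][0]
context = ex["context_left"] + ex["mention"] + ex["context_right"]
print(ex["context_lang"], "->", "Q" + ex["label_id"])  # e.g. 830917 -> Q830917
print(context)
```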
adithya7/xlel_wd
[ "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:af", "language:ar", "language:be", "language:bg", "language:bn", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:fa", "language:fi", "language:fr", "language:he", "language:hi", "language:hu", "language:id", "language:it", "language:ja", "language:ko", "language:ml", "language:mr", "language:ms", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:si", "language:sk", "language:sl", "language:sr", "language:sv", "language:sw", "language:ta", "language:te", "language:th", "language:tr", "language:uk", "language:vi", "language:zh", "license:cc-by-4.0", "arxiv:2204.06535", "region:us" ]
2022-04-22T01:50:11+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["af", "ar", "be", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "ko", "ml", "mr", "ms", "nl", "no", "pl", "pt", "ro", "ru", "si", "sk", "sl", "sr", "sv", "sw", "ta", "te", "th", "tr", "uk", "vi", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "XLEL-WD is a multilingual event linking dataset. This dataset contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding Wikipedia articles."}
2022-07-13T06:46:57+00:00
[ "2204.06535" ]
[ "af", "ar", "be", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "ko", "ml", "mr", "ms", "nl", "no", "pl", "pt", "ro", "ru", "si", "sk", "sl", "sr", "sv", "sw", "ta", "te", "th", "tr", "uk", "vi", "zh" ]
TAGS #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Afrikaans #language-Arabic #language-Belarusian #language-Bulgarian #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hebrew #language-Hindi #language-Hungarian #language-Indonesian #language-Italian #language-Japanese #language-Korean #language-Malayalam #language-Marathi #language-Malay (macrolanguage) #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Sinhala #language-Slovak #language-Slovenian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Turkish #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-4.0 #arxiv-2204.06535 #region-us
Dataset Card for XLEL-WD
========================


Table of Contents
-----------------


* Table of Contents
* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions


Dataset Description
-------------------


* Homepage: <URL
* Repository: <URL
* Paper: <URL
* Leaderboard: N/A
* Point of Contact: Adithya Pratapa


### Dataset Summary


XLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.


The descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from adithya7/xlel\_wd\_dictionary.


### Supported Tasks and Leaderboards


This dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.


* Multilingual linking: mention and the event descriptions are in the same language.
* Crosslingual linking: the event descriptions are only available in English.


### Languages


This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.


Dataset Structure
-----------------


### Data Instances


Each instance in the 'URL', 'URL' and 'URL' files follows the template below.


### Data Fields


### Data Splits


The Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup.


The Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.


Dataset Creation
----------------


### Curation Rationale


This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it's unclear if the same methodologies can be extended for linking mentions to events from a KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.


### Source Data


#### Initial Data Collection and Normalization


First, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.


#### Who are the source language producers?


The documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.


### Annotations


#### Annotation process


This dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality.


#### Who are the annotators?


The annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added by the original Wiki contributors.


### Personal and Sensitive Information


Considerations for Using the Data
---------------------------------


### Social Impact of Dataset


### Discussion of Biases


### Other Known Limitations


XLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. 
It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198). Additional Information ---------------------- ### Dataset Curators The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at Github:xlel-wd. ### Licensing Information XLEL-WD dataset is released under CC-BY-4.0 license. ### Contributions Thanks to @adithya7 for adding this dataset.
[ "### Dataset Summary\n\n\nXLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.\n\n\nThe descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from adithya7/xlel\\_wd\\_dictionary.", "### Supported Tasks and Leaderboards\n\n\nThis dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.\n\n\n* Multilingual linking: mention and the event descriptions are in the same language.\n* Crosslingual linking: the event descriptions are only available in English.", "### Languages\n\n\nThis dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in the 'URL', 'URL' and 'URL' files follow the below template.", "### Data Fields", "### Data Splits\n\n\nThe Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup.\n\n\n\nThe Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset helps address the task of event linking. KB linking is extensively studied for entities, but its unclear if the same methodologies can be extended for linking mentions to events from KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFirst, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.", "#### Who are the source language producers?\n\n\nThe documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.", "### Annotations", "#### Annotation process\n\n\nThis dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality.", "#### Who are the annotators?\n\n\nThe annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added the original Wiki contributors.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nXLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198).\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at Github:xlel-wd.", "### Licensing Information\n\n\nXLEL-WD dataset is released under CC-BY-4.0 license.", "### Contributions\n\n\nThanks to @adithya7 for adding this dataset." ]
[ "TAGS\n#annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Afrikaans #language-Arabic #language-Belarusian #language-Bulgarian #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hebrew #language-Hindi #language-Hungarian #language-Indonesian #language-Italian #language-Japanese #language-Korean #language-Malayalam #language-Marathi #language-Malay (macrolanguage) #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Sinhala #language-Slovak #language-Slovenian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Turkish #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-4.0 #arxiv-2204.06535 #region-us \n", "### Dataset Summary\n\n\nXLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.\n\n\nThe descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from adithya7/xlel\\_wd\\_dictionary.", "### Supported Tasks and Leaderboards\n\n\nThis dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.\n\n\n* Multilingual linking: mention and the event descriptions are in the same language.\n* Crosslingual linking: the event descriptions are only available in English.", "### Languages\n\n\nThis dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in the 'URL', 'URL' and 'URL' files follow the below template.", "### Data Fields", "### Data Splits\n\n\nThe Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup.\n\n\n\nThe Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset helps address the task of event linking. KB linking is extensively studied for entities, but its unclear if the same methodologies can be extended for linking mentions to events from KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFirst, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.", "#### Who are the source language producers?\n\n\nThe documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.", "### Annotations", "#### Annotation process\n\n\nThis dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. 
It was post-processed to improve data quality.", "#### Who are the annotators?\n\n\nThe annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added the original Wiki contributors.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nXLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198).\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at Github:xlel-wd.", "### Licensing Information\n\n\nXLEL-WD dataset is released under CC-BY-4.0 license.", "### Contributions\n\n\nThanks to @adithya7 for adding this dataset." ]
a156ba94142aa70a7ed31153a815f3990d87ff03
# Dataset Card for [FrozenLake-v1] with slippery = True
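The card is terse, so a small usage sketch may help; it assumes the classic `gym` step API (pre-0.26, where `step` returns a 4-tuple), under which `is_slippery=True` gives the stochastic transition dynamics referred to in the title.

```python
import gym

# FrozenLake-v1 with slippery ice: the agent moves in the intended
# direction with probability 1/3 and slides perpendicular to it otherwise.
env = gym.make("FrozenLake-v1", is_slippery=True)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy, purely for illustration
    obs, reward, done, info = env.step(action)
print("episode finished with reward", reward)
```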
AntoineLB/FrozenLakeFrozen
[ "region:us" ]
2022-04-22T06:06:34+00:00
{}
2022-04-22T06:57:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for [FrozenLake-v1] with slippery = True
[ "# Dataset Card for [FrozenLake-v1] with slippery = True" ]
[ "TAGS\n#region-us \n", "# Dataset Card for [FrozenLake-v1] with slippery = True" ]
b9dee7e7cf675ed6f2b97378b8de74920162b617
## Overview
Original dataset [here](https://github.com/felipessalvatore/NLI_datasets).

Below, the original description is reported for convenience.

```latex
@MISC{Fracas96,
    author = {{The Fracas Consortium} and Robin Cooper and Dick Crouch and Jan Van Eijck and Chris Fox and Josef Van Genabith and Jan Jaspars and Hans Kamp and David Milward and Manfred Pinkal and Massimo Poesio and Steve Pulman and Ted Briscoe and Holger Maier and Karsten Konrad},
    title = {Using the Framework},
    year = {1996}
}
```

Adapted from [https://nlp.stanford.edu/~wcmac/downloads/fracas.xml](https://nlp.stanford.edu/~wcmac/downloads/fracas.xml). We took `P1, ..., Pn` as premise and H as hypothesis. Labels have been mapped as follows `{'yes': "entailment", 'no': 'contradiction', 'undef': "neutral", 'unknown': "neutral"}`. We randomly split the data 80/20 for train/dev.

## Dataset curation
One hypothesis in the dev set and three hypotheses in the train set are empty and have been filled in with the empty string `""`. Labels are encoded with custom NLI mapping, that is

```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```

## Code to create the dataset

```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
from pathlib import Path


# load datasets
path = Path("<path to folder>/nli_datasets")
datasets = {}
for dataset_path in path.iterdir():
    datasets[dataset_path.name] = {}
    for name in dataset_path.iterdir():
        df = pd.read_csv(name)
        datasets[dataset_path.name][name.name.split(".")[0]] = df

ds = {}
for name, df_ in datasets["fracas"].items():
    df = df_.copy()

    assert df["label"].isna().sum() == 0

    # fill-in empty hypothesis
    df = df.fillna("")

    # encode labels
    df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

    # cast to dataset
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    ds[name] = Dataset.from_pandas(df, features=features)

dataset = DatasetDict(ds)
dataset.push_to_hub("fracas", token="<token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(ds.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            ds[i].to_pandas(),
            ds[j].to_pandas(),
            on=["label", "premise", "hypothesis"],
            how="inner",
        ).shape[0],
    )
#> train - dev: 0
```
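To round out the creation script above, here is a small sketch of loading the pushed dataset back and decoding its labels; it assumes the standard `datasets` API and the repo name `pietrolesci/fracas` from this card.

```python
from datasets import load_dataset

ds = load_dataset("pietrolesci/fracas")

# Labels were encoded as {"entailment": 0, "neutral": 1, "contradiction": 2};
# the ClassLabel feature maps the integers back to strings.
label_feature = ds["train"].features["label"]

ex = ds["train"][0]
print("premise:   ", ex["premise"])
print("hypothesis:", ex["hypothesis"])
print("label:     ", label_feature.int2str(ex["label"]))
```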
pietrolesci/fracas
[ "region:us" ]
2022-04-22T07:35:48+00:00
{}
2022-04-25T07:40:07+00:00
[]
[]
TAGS #region-us
## Overview
Original dataset here.

Below, the original description is reported for convenience.


Adapted from URL We took 'P1, ..., Pn' as premise and H as hypothesis. Labels have been mapped as follows '{'yes': "entailment", 'no': 'contradiction', 'undef': "neutral", 'unknown': "neutral"}'. We randomly split the data 80/20 for train/dev.

## Dataset curation
One hypothesis in the dev set and three hypotheses in the train set are empty and have been
filled in with the empty string '""'. Labels are encoded with custom NLI mapping, that is

## Code to create the dataset
[ "## Overview\nOriginal dataset here.\n\nBelow the original description reported for convenience.\n\n\nAdapted from URL We took 'P1, ..., Pn' as premise and H as hypothesis. Labels have been mapped as follows '{'yes': \"entailment\", 'no': 'contradiction', 'undef': \"neutral\", 'unknown': \"neutral\"}'. And we randomly split 80/20 for train/dev.", "## Dataset curation\nOne hypothesis in the dev set and three hypotheses in the train set are empty and have been\nfilled in with the empty string '\"\"'. Labels are encoded with custom NLI mapping, that is", "## Code to create the dataset" ]
[ "TAGS\n#region-us \n", "## Overview\nOriginal dataset here.\n\nBelow the original description reported for convenience.\n\n\nAdapted from URL We took 'P1, ..., Pn' as premise and H as hypothesis. Labels have been mapped as follows '{'yes': \"entailment\", 'no': 'contradiction', 'undef': \"neutral\", 'unknown': \"neutral\"}'. And we randomly split 80/20 for train/dev.", "## Dataset curation\nOne hypothesis in the dev set and three hypotheses in the train set are empty and have been\nfilled in with the empty string '\"\"'. Labels are encoded with custom NLI mapping, that is", "## Code to create the dataset" ]
af87ac826a01c8ce7aaed0015c8710cee48007bc
---
licenses:
- cc-by-2-0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tatoeba
pretty_name: Tatoeba
---

# Dataset Card for Tatoeba

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://opus.nlpl.eu/Tatoeba.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]

### Dataset Summary

Tatoeba is a collection of sentences and translations.

To load a language pair which isn't part of the config, all you need to do is specify the two language codes as a pair.

You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tatoeba.php

E.g.

`dataset = load_dataset("tatoeba", lang1="en", lang2="he")`

The default date is v2021-07-22, but you can also change the date with

`dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")`

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[@loretoparisi](https://github.com/loretoparisi)
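As a concrete illustration of the loading pattern in the summary, the sketch below requests an English–Hebrew pair; it assumes the usual OPUS-style loader layout (a single `train` split with `translation` dicts keyed by language code), which is not stated on this card.

```python
from datasets import load_dataset

# Language pair and snapshot date, as described in the Dataset Summary.
ds = load_dataset("tatoeba", lang1="en", lang2="he", date="v2021-07-22")

# OPUS-style loaders typically expose one "translation" dict per example,
# keyed by the two language codes (an assumption, not verified here).
ex = ds["train"][0]
print(ex["translation"]["en"])
print(ex["translation"]["he"])
```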
loretoparisi/tatoeba-sentences
[ "region:us" ]
2022-04-22T07:48:18+00:00
{"license": "cc-by-2-0"}
2022-04-27T16:26:31+00:00
[]
[]
TAGS #region-us
---
licenses:
- cc-by-2-0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tatoeba
pretty_name: Tatoeba
---

# Dataset Card for Tatoeba

## Table of Contents
- Dataset Description
 - Dataset Summary
 - Supported Tasks and Leaderboards
 - Languages
- Dataset Structure
 - Data Instances
 - Data Fields
 - Data Splits
- Dataset Creation
 - Curation Rationale
 - Source Data
 - Annotations
 - Personal and Sensitive Information
- Considerations for Using the Data
 - Social Impact of Dataset
 - Discussion of Biases
 - Other Known Limitations
- Additional Information
 - Dataset Curators
 - Licensing Information
 - Citation Information
 - Contributions

## Dataset Description
- Homepage: URL
- Repository: None
- Paper: URL
- Leaderboard: 
- Point of Contact: 

### Dataset Summary
Tatoeba is a collection of sentences and translations.
To load a language pair which isn't part of the config, all you need to do is specify the two language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: URL
E.g.
'dataset = load_dataset("tatoeba", lang1="en", lang2="he")'
The default date is v2021-07-22, but you can also change the date with
'dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")'

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Contributions
@loretoparisi
[ "# Dataset Card for Tatoeba", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\nTatoeba is a collection of sentences and translations.\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n'dataset = load_dataset(\"tatoeba\", lang1=\"en\", lang2=\"he\")'\nThe default date is v2021-07-22, but you can also change the date with\n'dataset = load_dataset(\"tatoeba\", lang1=\"en\", lang2=\"he\", date=\"v2020-11-09\")'", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n@loretoparisi" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Tatoeba", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\nTatoeba is a collection of sentences and translations.\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n'dataset = load_dataset(\"tatoeba\", lang1=\"en\", lang2=\"he\")'\nThe default date is v2021-07-22, but you can also change the date with\n'dataset = load_dataset(\"tatoeba\", lang1=\"en\", lang2=\"he\", date=\"v2020-11-09\")'", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n@loretoparisi" ]
36dbc520e45ddad0b14c6526ebbae8ed01bc5d7c
## Overview
Original dataset is available on the HuggingFace Hub [here](https://huggingface.co/datasets/scitail).

## Dataset curation
This is the same as the `snli_format` configuration of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc.).
The only differences are the following:

- selecting only the columns `["sentence1", "sentence2", "gold_label", "label"]`
- renaming columns with the following mapping `{"sentence1": "premise", "sentence2": "hypothesis"}`
- creating a new column "label" from "gold_label" with the following mapping `{"entailment": "entailment", "neutral": "not_entailment"}`
- encoding labels with the following mapping `{"not_entailment": 0, "entailment": 1}`

Note that there are 10 overlapping instances (as found by merging on columns "label", "premise", and "hypothesis") between `train` and `test` splits.

## Code to create the dataset

```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset

# load datasets from the Hub
dd = load_dataset("scitail", "snli_format")

ds = {}
for name, df_ in dd.items():
    df = df_.to_pandas()

    # select important columns
    df = df[["sentence1", "sentence2", "gold_label"]]

    # rename columns
    df = df.rename(columns={"sentence1": "premise", "sentence2": "hypothesis"})

    # encode labels
    df["label"] = df["gold_label"].map({"entailment": "entailment", "neutral": "not_entailment"})
    df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1})

    # cast to dataset
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]),
    })
    ds[name] = Dataset.from_pandas(df, features=features)

dataset = DatasetDict(ds)
dataset.push_to_hub("scitail", token="<token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(dataset.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            dataset[i].to_pandas(),
            dataset[j].to_pandas(),
            on=["label", "premise", "hypothesis"],
            how="inner",
        ).shape[0],
    )
#> train - test: 10
#> train - validation: 0
#> test - validation: 0
```
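As a complement to the creation script, the sketch below reloads the pushed dataset, decodes the binary labels, and re-checks the reported train/test overlap; it assumes the standard `datasets` API and the repo name `pietrolesci/scitail`.

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("pietrolesci/scitail")

# Labels were encoded as {"not_entailment": 0, "entailment": 1}.
print(ds["train"].features["label"].names)  # ['not_entailment', 'entailment']

# Reproduce the overlap count noted above (expected: 10).
overlap = pd.merge(
    ds["train"].to_pandas(),
    ds["test"].to_pandas(),
    on=["label", "premise", "hypothesis"],
    how="inner",
)
print(len(overlap))
```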
pietrolesci/scitail
[ "region:us" ]
2022-04-22T08:06:21+00:00
{}
2022-04-25T09:40:47+00:00
[]
[]
TAGS #region-us
## Overview
Original dataset is available on the HuggingFace Hub here.

## Dataset curation
This is the same as the 'snli_format' configuration of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc.).
The only differences are the following:

- selecting only the columns '["sentence1", "sentence2", "gold_label", "label"]'
- renaming columns with the following mapping '{"sentence1": "premise", "sentence2": "hypothesis"}'
- creating a new column "label" from "gold_label" with the following mapping '{"entailment": "entailment", "neutral": "not_entailment"}'
- encoding labels with the following mapping '{"not_entailment": 0, "entailment": 1}'

Note that there are 10 overlapping instances (as found by merging on columns "label", "premise", and "hypothesis") between
'train' and 'test' splits.

## Code to create the dataset
[ "## Overview\nOriginal dataset is available on the HuggingFace Hub here.", "## Dataset curation\nThis is the same as the 'snli_format' split of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc).\nThe only differences are the following:\n\n- selecting only the columns '[\"sentence1\", \"sentence2\", \"gold_label\", \"label\"]'\n- renaming columns with the following mapping '{\"sentence1\": \"premise\", \"sentence2\": \"hypothesis\"}'\n- creating a new column \"label\" from \"gold_label\" with the following mapping '{\"entailment\": \"entailment\", \"neutral\": \"not_entailment\"}'\n- encoding labels with the following mapping '{\"not_entailment\": 0, \"entailment\": 1}'\n\nNote that there are 10 overlapping instances (as found by merging on columns \"label\", \"premise\", and \"hypothesis\") between\n'train' and 'test' splits.", "## Code to create the dataset" ]
[ "TAGS\n#region-us \n", "## Overview\nOriginal dataset is available on the HuggingFace Hub here.", "## Dataset curation\nThis is the same as the 'snli_format' split of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc).\nThe only differences are the following:\n\n- selecting only the columns '[\"sentence1\", \"sentence2\", \"gold_label\", \"label\"]'\n- renaming columns with the following mapping '{\"sentence1\": \"premise\", \"sentence2\": \"hypothesis\"}'\n- creating a new column \"label\" from \"gold_label\" with the following mapping '{\"entailment\": \"entailment\", \"neutral\": \"not_entailment\"}'\n- encoding labels with the following mapping '{\"not_entailment\": 0, \"entailment\": 1}'\n\nNote that there are 10 overlapping instances (as found by merging on columns \"label\", \"premise\", and \"hypothesis\") between\n'train' and 'test' splits.", "## Code to create the dataset" ]
2dceb8142327bf9eac3ff8927e2f39533a4afc8e
# TermITH-Eval Benchmark Dataset for Keyphrase Generation

## About

TermITH-Eval is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 400 abstracts of scientific papers in French collected from the FRANCIS and PASCAL databases of the French [Institute for Scientific and Technical Information (Inist)](https://www.inist.fr/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2016)][bougouin-2016].

Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract.

Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`.

## Content and statistics

The dataset contains the following test split:

| Split | # documents | # words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-----------:|-------------:|----------:|------------:|--------:|---------:|
| Test | 399 | 156.9 | 11.81 | 40.60 | 7.32 | 19.28 | 32.80 |

The following data fields are available:

- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **category**: category of the document, i.e. chimie (chemistry), archeologie (archeology), linguistique (linguistics) and scienceInfo (information sciences).

## References

- (Bougouin et al., 2016) Adrien Bougouin, Sabine Barreaux, Laurent Romary, Florian Boudin, and Béatrice Daille. 2016.
  [TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation][bougouin-2016].
  In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1924–1927, Portorož, Slovenia. European Language Resources Association (ELRA).
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
  [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
  In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.

[bougouin-2016]: https://aclanthology.org/L16-1304/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
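The card defers to `prmu.py` for the matching details; as a rough illustration of the scheme, the sketch below assigns a PRMU category to one keyphrase. It is a simplification under stated assumptions (whitespace tokenization instead of the spacy pipeline, and a plain substring test for contiguity), not the benchmark's exact implementation.

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("french")

def stem_tokens(text):
    # The benchmark tokenizes with spacy (fr_core_news_sm); whitespace
    # splitting is used here only to keep the sketch short.
    return [stemmer.stem(tok) for tok in text.lower().split()]

def prmu_category(keyphrase, source_text):
    """Classify a reference keyphrase as P, R, M or U against the source."""
    kp = stem_tokens(keyphrase)
    src = stem_tokens(source_text)
    if " ".join(kp) in " ".join(src):
        return "P"  # all words appear contiguously and in order
    if all(tok in src for tok in kp):
        return "R"  # all words present, but not as a contiguous sequence
    if any(tok in src for tok in kp):
        return "M"  # some words present, others absent
    return "U"      # no word of the keyphrase appears in the source

print(prmu_category("extraction de termes", "cet article traite de extraction de termes dans des corpus"))
```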
taln-ls2n/termith-eval
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:multilingual", "size_categories:n<1K", "language:fr", "license:cc-by-4.0", "region:us" ]
2022-04-22T08:09:23+00:00
{"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["fr"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "TermITH-Eval"}
2022-09-23T06:49:04+00:00
[]
[ "fr" ]
TAGS #task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-multilingual #size_categories-n<1K #language-French #license-cc-by-4.0 #region-us
TermITH-Eval Benchmark Dataset for Keyphrase Generation
=======================================================


About
-----


TermITH-Eval is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 400 abstracts of scientific papers in French collected from the FRANCIS and PASCAL databases of the French Institute for Scientific and Technical Information (Inist).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2016)](URL).


Reference (indexer-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in [(Boudin and Gallina, 2021)](URL).
Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract.


Text pre-processing (tokenization) is carried out using 'spacy' ('fr\_core\_news\_sm' model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in 'nltk') is applied before reference keyphrases are matched against the source text. Details about the process can be found in 'URL'.


Content and statistics
----------------------


The dataset contains the following test split:


The following data fields are available:


* id: unique identifier of the document.
* title: title of the document.
* abstract: abstract of the document.
* keyphrases: list of reference keyphrases.
* prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases.
* category: category of the document, i.e. chimie (chemistry), archeologie (archeology), linguistique (linguistics) and scienceInfo (information sciences).


References
----------


* (Bougouin et al., 2016) Adrien Bougouin, Sabine Barreaux, Laurent Romary, Florian Boudin, and Béatrice Daille. 2016. [TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation](URL). In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1924–1927, Portorož, Slovenia. European Language Resources Association (ELRA).
* (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](URL). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[]
[ "TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-multilingual #size_categories-n<1K #language-French #license-cc-by-4.0 #region-us \n" ]
8a11d2b48a0276e70d77b4eb21e3078415a10822
ROOTS Subset: roots_ar_uncorpus # uncorpus - Dataset uid: `uncorpus` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
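The processing steps above are given only by name; the sketch below offers one plausible reading of them, with thresholds taken from the step names (300 bytes for Arabic, 1024 bytes for the other languages). The function bodies are assumptions, not the actual BigScience code.

```python
def filter_remove_empty_docs(docs):
    # Drop documents whose text is empty or whitespace-only.
    return [d for d in docs if d["text"].strip()]

def filter_small_docs_bytes(docs, min_bytes):
    # Drop documents below a minimum UTF-8 size (e.g. 300 bytes for ar,
    # 1024 bytes for fr/es/en/zh, per the step names above).
    return [d for d in docs if len(d["text"].encode("utf-8")) >= min_bytes]

def dedup_document(docs):
    # Keep only the first occurrence of each exact duplicate text.
    seen, out = set(), []
    for d in docs:
        if d["text"] not in seen:
            seen.add(d["text"])
            out.append(d)
    return out

raw_docs = [{"text": ""}, {"text": "short"}, {"text": "x" * 400}, {"text": "x" * 400}]
docs = dedup_document(filter_small_docs_bytes(filter_remove_empty_docs(raw_docs), 300))
print(len(docs))  # 1
```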
bigscience-data/roots_ar_uncorpus
[ "language:ar", "license:cc-by-4.0", "region:us" ]
2022-04-22T09:23:52+00:00
{"language": "ar", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}}
2022-12-12T10:59:32+00:00
[]
[ "ar" ]
TAGS #language-Arabic #license-cc-by-4.0 #region-us
ROOTS Subset: roots_ar_uncorpus # uncorpus - Dataset uid: 'uncorpus' ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
[ "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
[ "TAGS\n#language-Arabic #license-cc-by-4.0 #region-us \n", "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
32c8b25b9390fcbf17012195d7480d1b91e7f751
ROOTS Subset: roots_en_uncorpus # uncorpus - Dataset uid: `uncorpus` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
bigscience-data/roots_en_uncorpus
[ "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-22T09:26:12+00:00
{"language": "en", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}}
2022-12-12T10:59:37+00:00
[]
[ "en" ]
TAGS #language-English #license-cc-by-4.0 #region-us
ROOTS Subset: roots_en_uncorpus # uncorpus - Dataset uid: 'uncorpus' ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
[ "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
[ "TAGS\n#language-English #license-cc-by-4.0 #region-us \n", "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
3c479087d05129205cecc815ce199ce803c66149
ROOTS Subset: roots_es_uncorpus # uncorpus - Dataset uid: `uncorpus` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
bigscience-data/roots_es_uncorpus
[ "language:es", "license:cc-by-4.0", "region:us" ]
2022-04-22T09:28:27+00:00
{"language": "es", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}}
2022-12-12T10:59:42+00:00
[]
[ "es" ]
TAGS #language-Spanish #license-cc-by-4.0 #region-us
ROOTS Subset: roots_es_uncorpus # uncorpus - Dataset uid: 'uncorpus' ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
[ "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
[ "TAGS\n#language-Spanish #license-cc-by-4.0 #region-us \n", "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
020c5babdd484dc981b357a153db664beb1fdbba
ROOTS Subset: roots_fr_uncorpus # uncorpus - Dataset uid: `uncorpus` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
bigscience-data/roots_fr_uncorpus
[ "language:fr", "license:cc-by-4.0", "region:us" ]
2022-04-22T09:30:47+00:00
{"language": "fr", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}}
2022-12-12T10:29:02+00:00
[]
[ "fr" ]
TAGS #language-French #license-cc-by-4.0 #region-us
ROOTS Subset: roots_fr_uncorpus # uncorpus - Dataset uid: 'uncorpus' ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
[ "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
[ "TAGS\n#language-French #license-cc-by-4.0 #region-us \n", "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
f7782c950faee7385b25eef0bb0499009e6df956
ROOTS Subset: roots_zh_uncorpus # uncorpus - Dataset uid: `uncorpus` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
bigscience-data/roots_zh_uncorpus
[ "language:zh", "license:cc-by-4.0", "region:us" ]
2022-04-22T09:33:31+00:00
{"language": "zh", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}}
2022-12-12T10:59:49+00:00
[]
[ "zh" ]
TAGS #language-Chinese #license-cc-by-4.0 #region-us
ROOTS Subset: roots_zh_uncorpus # uncorpus - Dataset uid: 'uncorpus' ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
[ "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
[ "TAGS\n#language-Chinese #license-cc-by-4.0 #region-us \n", "# uncorpus\n\n- Dataset uid: 'uncorpus'", "### Description", "### Homepage", "### Licensing", "### Speaker Locations", "### Sizes\n\n- 2.8023 % of total\n- 10.7390 % of ar\n- 5.7970 % of fr\n- 9.7477 % of es\n- 2.0417 % of en\n- 1.2540 % of zh", "### BigScience processing steps", "#### Filters applied to: ar\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_300", "#### Filters applied to: fr\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: es\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: en\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024", "#### Filters applied to: zh\n\n- dedup_document\n- filter_remove_empty_docs\n- filter_small_docs_bytes_1024" ]
3671c49f3c072e6ec8047f15926db10e02de487c
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p> # Dataset Card for HiNER-original [![Twitter Follow](https://img.shields.io/twitter/follow/cfiltnlp?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/cfiltnlp) [![Twitter Follow](https://img.shields.io/twitter/follow/PeopleCentredAI?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/PeopleCentredAI) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/cfiltnlp/HiNER - **Repository:** https://github.com/cfiltnlp/HiNER - **Paper:** https://arxiv.org/abs/2204.13743 - **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-collapsed - **Point of Contact:** Rudra Murthy V ### Dataset Summary This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy. **Note:** The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset. ### Supported Tasks and Leaderboards Named Entity Recognition ### Languages Hindi ## Dataset Structure ### Data Instances {'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]} ### Data Fields - `id`: The ID value of the data point. - `tokens`: Raw tokens in the dataset. - `ner_tags`: the NER tags for this dataset. ### Data Splits | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | original | 76025 | 10861 | 21722| | collapsed | 76025 | 10861 | 21722| ## About This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743). ### Recent Updates * Version 0.0.5: HiNER initial release ## Usage You should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. 
Please use the following command and install via pip:

```shell
pip install datasets
```

To use the original dataset with all the tags, please use:<br/>

```python
from datasets import load_dataset

hiner = load_dataset('cfilt/HiNER-original')
```

To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>

```python
from datasets import load_dataset

hiner = load_dataset('cfilt/HiNER-collapsed')
```

However, the CoNLL format dataset files can also be found in this Git repository under the [data](data/) folder.

## Model(s)

Our best performing models are hosted on the HuggingFace models repository:

1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)

## Dataset Creation

### Curation Rationale

HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition and introduces new resources for Hindi, a language under-served in Natural Language Processing.

### Source Data

#### Initial Data Collection and Normalization

HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi

#### Who are the source language producers?

Various Government of India webpages

### Annotations

#### Annotation process

This dataset was manually annotated by a single annotator over a long span of time.

#### Who are the annotators?

Pallab Bhattacharjee

### Personal and Sensitive Information

We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.

### Discussion of Biases

Any biases contained in the data released by the Indian government are bound to be present in our data.

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

Pallab Bhattacharjee

### Licensing Information

CC-BY-SA 4.0

### Citation Information

```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
  doi = {10.48550/ARXIV.2204.13743},
  url = {https://arxiv.org/abs/2204.13743},
  author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
cfilt/HiNER-collapsed
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:hi", "license:cc-by-sa-4.0", "arxiv:2204.13743", "region:us" ]
2022-04-22T09:51:15+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["hi"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "hiner-collapsed-1", "pretty_name": "HiNER - Large Hindi Named Entity Recognition dataset"}
2023-03-07T16:32:27+00:00
[ "2204.13743" ]
[ "hi" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Hindi #license-cc-by-sa-4.0 #arxiv-2204.13743 #region-us
![](URL alt=) Dataset Card for HiNER-original =============================== ![Twitter Follow](URL ![Twitter Follow](URL Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: Rudra Murthy V ### Dataset Summary This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy. Note: The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset. ### Supported Tasks and Leaderboards Named Entity Recognition ### Languages Hindi Dataset Structure ----------------- ### Data Instances {'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner\_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]} ### Data Fields * 'id': The ID value of the data point. * 'tokens': Raw tokens in the dataset. * 'ner\_tags': the NER tags for this dataset. ### Data Splits About ----- This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available here. ### Recent Updates * Version 0.0.5: HiNER initial release Usage ----- You should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip: To use the original dataset with all the tags, please use: To use the collapsed dataset with only PER, LOC, and ORG tags, please use: However, the CoNLL format dataset files can also be found on this Git repository under the data folder. Model(s) -------- Our best performing models are hosted on the HuggingFace models repository: 1. HiNER-Collapsed-XLM-R 2. HiNER-Original-XLM-R Dataset Creation ---------------- ### Curation Rationale HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing. ### Source Data #### Initial Data Collection and Normalization HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi #### Who are the source language producers? Various Government of India webpages ### Annotations #### Annotation process This dataset was manually annotated by a single annotator of a long span of time. #### Who are the annotators? 
Pallab Bhattacharjee ### Personal and Sensitive Information We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data. ### Discussion of Biases Any biases contained in the data released by the Indian government are bound to be present in our data. ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Pallab Bhattacharjee ### Licensing Information CC-BY-SA 4.0
[ "### Dataset Summary\n\n\nThis dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.\n\n\nNote: The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.", "### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition", "### Languages\n\n\nHindi\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner\\_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}", "### Data Fields\n\n\n* 'id': The ID value of the data point.\n* 'tokens': Raw tokens in the dataset.\n* 'ner\\_tags': the NER tags for this dataset.", "### Data Splits\n\n\n\nAbout\n-----\n\n\nThis repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available here.", "### Recent Updates\n\n\n* Version 0.0.5: HiNER initial release\n\n\nUsage\n-----\n\n\nYou should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip:\n\n\nTo use the original dataset with all the tags, please use: \n\n\n\nTo use the collapsed dataset with only PER, LOC, and ORG tags, please use: \n\n\n\nHowever, the CoNLL format dataset files can also be found on this Git repository under the data folder.\n\n\nModel(s)\n--------\n\n\nOur best performing models are hosted on the HuggingFace models repository:\n\n\n1. HiNER-Collapsed-XLM-R\n2. HiNER-Original-XLM-R\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi", "#### Who are the source language producers?\n\n\nVarious Government of India webpages", "### Annotations", "#### Annotation process\n\n\nThis dataset was manually annotated by a single annotator of a long span of time.", "#### Who are the annotators?\n\n\nPallab Bhattacharjee", "### Personal and Sensitive Information\n\n\nWe ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. 
Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.", "### Discussion of Biases\n\n\nAny biases contained in the data released by the Indian government are bound to be present in our data.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nPallab Bhattacharjee", "### Licensing Information\n\n\nCC-BY-SA 4.0" ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Hindi #license-cc-by-sa-4.0 #arxiv-2204.13743 #region-us \n", "### Dataset Summary\n\n\nThis dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.\n\n\nNote: The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.", "### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition", "### Languages\n\n\nHindi\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner\\_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}", "### Data Fields\n\n\n* 'id': The ID value of the data point.\n* 'tokens': Raw tokens in the dataset.\n* 'ner\\_tags': the NER tags for this dataset.", "### Data Splits\n\n\n\nAbout\n-----\n\n\nThis repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available here.", "### Recent Updates\n\n\n* Version 0.0.5: HiNER initial release\n\n\nUsage\n-----\n\n\nYou should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip:\n\n\nTo use the original dataset with all the tags, please use: \n\n\n\nTo use the collapsed dataset with only PER, LOC, and ORG tags, please use: \n\n\n\nHowever, the CoNLL format dataset files can also be found on this Git repository under the data folder.\n\n\nModel(s)\n--------\n\n\nOur best performing models are hosted on the HuggingFace models repository:\n\n\n1. HiNER-Collapsed-XLM-R\n2. HiNER-Original-XLM-R\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi", "#### Who are the source language producers?\n\n\nVarious Government of India webpages", "### Annotations", "#### Annotation process\n\n\nThis dataset was manually annotated by a single annotator of a long span of time.", "#### Who are the annotators?\n\n\nPallab Bhattacharjee", "### Personal and Sensitive Information\n\n\nWe ensured that there was no sensitive information present in the dataset. 
All the data points are curated from publicly available information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.", "### Discussion of Biases\n\n\nAny biases contained in the data released by the Indian government are bound to be present in our data.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nPallab Bhattacharjee", "### Licensing Information\n\n\nCC-BY-SA 4.0" ]
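The HiNER card's usage snippet stops at `load_dataset`; mapping the integer `ner_tags` back to tag strings goes through the dataset's `ClassLabel` feature. A short sketch, assuming the hub id published in the card and the usual `Sequence(ClassLabel)` layout for `ner_tags` (the card itself does not list the tag inventory):

```python
from datasets import load_dataset

# Assumes the hub id from the card and that `ner_tags` is a Sequence of
# ClassLabel features, as is standard for NER datasets on the hub.
hiner = load_dataset("cfilt/HiNER-collapsed")
tag_names = hiner["train"].features["ner_tags"].feature.names

example = hiner["train"][0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_names[tag_id]}")
```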
c98da16de9bf6c8c09143b61be6079f85bfd1373
# Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation

## About

SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 244 **full-text** scientific papers collected from the [ACM Digital Library](https://dl.acm.org/). Keyphrases were annotated by readers and combined with those provided by the authors. Details about the SemEval-2010 dataset can be found in the original paper [(Kim et al., 2010)][kim-2010].

This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016] and provides four increasingly sophisticated levels of document preprocessing:

* `lvl-1`: default text files provided by the SemEval-2010 organizers.

* `lvl-2`: for each file, we manually retrieved the original PDF file from the ACM Digital Library. We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505. We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements: title, headers, abstract, introduction, related work, body text and conclusion. We finally apply a systematic dehyphenation at line breaks.

* `lvl-3`: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.

* `lvl-4`: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique. We keep the title and abstract and select the most content-bearing sentences from the remaining contents.

Titles and abstracts, collected from the [SciCorefCorpus](https://github.com/melsk125/SciCorefCorpus), are also provided. Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)][chaimongkol-2014].

Reference keyphrases are provided in stemmed form (because they were provided like this for the test split in the competition). They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`. The <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and text (lvl-1).

## Content and statistics

The dataset is divided into the following two splits:

| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-------:|-------------:|----------:|------------:|--------:|---------:|
| Train | 144 | 184.6 | 15.44 | 42.16 | 7.36 | 26.85 | 23.63 |
| Test | 100 | 203.1 | 14.66 | 40.11 | 8.34 | 27.12 | 24.43 |

Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers.

The following data fields are available:

- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **lvl-1**: content of the document with no text processing.
- **lvl-2**: content of the document retrieved from original PDF files and cleaned up.
- **lvl-3**: content of the document further abridged to relevant sections. - **lvl-4**: content of the document further abridged using an unsupervised summarization technique. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. ## References - (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. [SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010]. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics. - (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014. [Corpus for Coreference Resolution on Scientific Papers][chaimongkol-2014]. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA). - (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016. [How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016]. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee. - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021]. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [kim-2010]: https://aclanthology.org/S10-1004/ [chaimongkol-2014]: https://aclanthology.org/L14-1259/ [boudin-2016]: https://aclanthology.org/W16-3917/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
taln-ls2n/semeval-2010-pre
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:cc-by-4.0", "region:us" ]
2022-04-22T11:10:54+00:00
{"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "Preprocessed SemEval-2010 Benchmark dataset"}
2022-09-23T06:37:43+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us
Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation ==================================================================== About ----- SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 244 full-text scientific papers collected from the ACM Digital Library. Keyphrases were annotated by readers and combined with those provided by the authors. Details about the SemEval-2010 dataset can be found in the original paper [(kim et al., 2010)](URL). This version of the dataset was produced by [(Boudin et al., 2016)](URL) and provides four increasingly sophisticated levels of document preprocessing: * 'lvl-1': default text files provided by the SemEval-2010 organizers. * 'lvl-2': for each file, we manually retrieved the original PDF file from the ACM Digital Library. We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505. We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements : title, headers, abstract, introduction, related work, body text and conclusion. We finally apply a systematic dehyphenation at line breaks.s * 'lvl-3': we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion. * 'lvl-4': we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique. We keep the title and abstract and select the most content bearing sentences from the remaining contents. Titles and abstracts, collected from the SciCorefCorpus, are also provided. Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)](URL). Reference keyphrases are provided in stemmed form (because they were provided like this for the test split in the competition). They are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in [(Boudin and Gallina, 2021)](URL). Text pre-processing (tokenization) is carried out using 'spacy' ('en\_core\_web\_sm' model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in 'nltk') is applied before reference keyphrases are matched against the source text. Details about the process can be found in 'URL'. The Present reference keyphrases are also ordered by their order of apparition in the concatenation of title and text (lvl-1). Content and statistics ---------------------- The dataset is divided into the following two splits: Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers. The following data fields are available : * id: unique identifier of the document. * title: title of the document. * abstract: abstract of the document. * lvl-1: content of the document with no text processing. * lvl-2: content of the document retrieved from original PDF files and cleaned up. * lvl-3: content of the document further abridged to relevant sections. * lvl-4: content of the document further abridged using an unsupervised summarization technique. * keyphrases: list of reference keyphrases. * prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases. References ---------- * (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 
2010. [SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles](URL). In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics. * (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014. [Corpus for Coreference Resolution on Scientific Papers](URL). In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA). * (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016. [How Document Pre-processing affects Keyphrase Extraction Performance](URL). In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee. * (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](URL). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[]
[ "TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us \n" ]
5bd658aa3bfea14d2c051f1c7dd34b456bbda4a0
## Overview Original dataset [here](https://github.com/aylai/MultiPremiseEntailment). ## Dataset curation Same data and splits as the original. The following columns have been added: - `premise`: concatenation of `premise1`, `premise2`, `premise3`, and `premise4` - `label`: encoded `gold_label` with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}` ## Code to create the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict from pathlib import Path # read data path = Path("<path to files>") datasets = {} for dataset_path in path.rglob("*.txt"): df = pd.read_csv(dataset_path, sep="\t") datasets[dataset_path.name.split("_")[1].split(".")[0]] = df ds = {} for name, df_ in datasets.items(): df = df_.copy() # fix parsing error for dev split if name == "dev": # fix parsing error df.loc[df["contradiction_judgments"] == "3 contradiction", "contradiction_judgments"] = 3 df.loc[df["gold_label"].isna(), "gold_label"] = "contradiction" # check no nan assert df.isna().sum().sum() == 0 # fix dtypes for col in ("entailment_judgments", "neutral_judgments", "contradiction_judgments"): df[col] = df[col].astype(int) # fix premise column for i in range(1, 4 + 1): df[f"premise{i}"] = df[f"premise{i}"].str.split("/", expand=True)[1] df["premise"] = df[[f"premise{i}" for i in range(1, 4 + 1)]].agg(" ".join, axis=1) # encode labels df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "premise1": Value(dtype="string", id=None), "premise2": Value(dtype="string", id=None), "premise3": Value(dtype="string", id=None), "premise4": Value(dtype="string", id=None), "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "entailment_judgments": Value(dtype="int32"), "neutral_judgments": Value(dtype="int32"), "contradiction_judgments": Value(dtype="int32"), "gold_label": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), }) ds[name] = Dataset.from_pandas(df, features=features) # push to hub ds = DatasetDict(ds) ds.push_to_hub("mpe", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> dev - test: 0 #> dev - train: 0 #> test - train: 0 ```
pietrolesci/mpe
[ "region:us" ]
2022-04-22T11:38:29+00:00
{}
2022-04-25T08:00:18+00:00
[]
[]
TAGS #region-us
## Overview Original dataset here. ## Dataset curation Same data and splits as the original. The following columns have been added: - 'premise': concatenation of 'premise1', 'premise2', 'premise3', and 'premise4' - 'label': encoded 'gold_label' with the following mapping '{"entailment": 0, "neutral": 1, "contradiction": 2}' ## Code to create the dataset
[ "## Overview\n\nOriginal dataset here.", "## Dataset curation\nSame data and splits as the original. The following columns have been added:\n\n- 'premise': concatenation of 'premise1', 'premise2', 'premise3', and 'premise4'\n- 'label': encoded 'gold_label' with the following mapping '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'", "## Code to create the dataset" ]
[ "TAGS\n#region-us \n", "## Overview\n\nOriginal dataset here.", "## Dataset curation\nSame data and splits as the original. The following columns have been added:\n\n- 'premise': concatenation of 'premise1', 'premise2', 'premise3', and 'premise4'\n- 'label': encoded 'gold_label' with the following mapping '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'", "## Code to create the dataset" ]
a5bdde974239556a20e6fc1624c2e32ee20b0c6a
## Overview Original data available [here](http://www.seas.upenn.edu/~nlp/resources/AN-composition.tgz). ## Dataset curation `premise` and `hypothesis` columns have been cleaned following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L51-L52), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L31-L32)), that is - remove HTML tags `<b>`, `<u>`, `</b>`, `</u>` - normalize repeated white spaces - strip `mean_human_score` has been transformed into class labels following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L20-L35), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L6-L17)), that is - for test set: `mean_human_score <= 3 -> "not-entailed"` and `mean_human_score >= 4 -> "entailed"` (anything between 3 and 4 has been removed) - for all other splits: `mean_human_score < 3.5 -> "not-entailed"` else `"entailed"` more details below. ## Code to generate the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict def convert_label(score, is_test): if is_test: if score <= 3: return "not-entailed" elif score >= 4: return "entailed" return "REMOVE" if score < 3.5: return "not-entailed" return "entailed" ds = {} for split in ("dev", "test", "train"): # read data df = pd.read_csv(f"<path to folder>/AN-composition/addone-entailment/splits/data.{split}", sep="\t", header=None) df.columns = ["mean_human_score", "binary_label", "sentence_id", "adjective", "noun", "premise", "hypothesis"] # clean text from html tags and useless spaces for col in ("premise", "hypothesis"): df[col] = ( df[col] .str.replace("(<b>)|(<u>)|(</b>)|(</u>)", " ", regex=True) .str.replace(" {2,}", " ", regex=True) .str.strip() ) # encode labels if split == "test": df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, True)) df = df.loc[df["label"] != "REMOVE"] else: df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, False)) assert df["label"].isna().sum() == 0 df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1}) # cast to dataset features = Features({ "mean_human_score": Value(dtype="float32"), "binary_label": Value(dtype="string"), "sentence_id": Value(dtype="string"), "adjective": Value(dtype="string"), "noun": Value(dtype="string"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]), }) ds[split] = Dataset.from_pandas(df, features=features) ds = DatasetDict(ds) ds.push_to_hub("add_one_rte", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> dev - test: 0 #> dev - train: 0 #> test - train: 0 ```
pietrolesci/add_one_rte
[ "region:us" ]
2022-04-22T12:56:41+00:00
{}
2022-04-25T07:48:42+00:00
[]
[]
TAGS #region-us
## Overview Original data available here. ## Dataset curation 'premise' and 'hypothesis' columns have been cleaned following common practices (1, 2), that is - remove HTML tags '<b>', '<u>', '</b>', '</u>' - normalize repeated white spaces - strip 'mean_human_score' has been transformed into class labels following common practices (1, 2), that is - for test set: 'mean_human_score <= 3 -> "not-entailed"' and 'mean_human_score >= 4 -> "entailed"' (anything between 3 and 4 has been removed) - for all other splits: 'mean_human_score < 3.5 -> "not-entailed"' else '"entailed"' more details below. ## Code to generate the dataset
[ "## Overview\n\nOriginal data available here.", "## Dataset curation\n\n'premise' and 'hypothesis' columns have been cleaned following common practices (1, 2), that is\n\n- remove HTML tags '<b>', '<u>', '</b>', '</u>'\n- normalize repeated white spaces\n- strip\n\n'mean_human_score' has been transformed into class labels following common practices (1, 2), that is\n\n- for test set: 'mean_human_score <= 3 -> \"not-entailed\"' and 'mean_human_score >= 4 -> \"entailed\"' (anything between 3 and 4 has been removed)\n- for all other splits: 'mean_human_score < 3.5 -> \"not-entailed\"' else '\"entailed\"'\n\nmore details below.", "## Code to generate the dataset" ]
[ "TAGS\n#region-us \n", "## Overview\n\nOriginal data available here.", "## Dataset curation\n\n'premise' and 'hypothesis' columns have been cleaned following common practices (1, 2), that is\n\n- remove HTML tags '<b>', '<u>', '</b>', '</u>'\n- normalize repeated white spaces\n- strip\n\n'mean_human_score' has been transformed into class labels following common practices (1, 2), that is\n\n- for test set: 'mean_human_score <= 3 -> \"not-entailed\"' and 'mean_human_score >= 4 -> \"entailed\"' (anything between 3 and 4 has been removed)\n- for all other splits: 'mean_human_score < 3.5 -> \"not-entailed\"' else '\"entailed\"'\n\nmore details below.", "## Code to generate the dataset" ]
c021bbdca0b644116166a56119e2adf49e575647
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact: Andrés Pitta: [email protected]** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
AndresPitta/sg-reports_labeled
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "license:unknown", "region:us" ]
2022-04-22T13:52:01+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en-US"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Gender language in the reports of the secretary general 2020-2021"}
2022-10-25T09:08:57+00:00
[]
[ "en-US" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #license-unknown #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: Andrés Pitta: URL@URL ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Andrés Pitta: URL@URL", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #license-unknown #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Andrés Pitta: URL@URL", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
49f76692fb17d5f51bfff93c80276ba700010005
## Overview This dataset has been introduced by "Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available [here](https://github.com/decompositional-semantics-initiative/DNC/raw/master/inference_is_everything.zip). ## Dataset curation The following processing is applied - `hypothesis_grammatical` and `judgement_valid` columns are filled with `""` when empty - all columns are stripped - the `entailed` column is renamed `label` - `label` column is encoded with the following mapping `{"not-entailed": 0, "entailed": 1}` - columns `rating` and `good_word` are dropped from `fnplus` dataset ## Code to generate the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict ds = {} for name in ("fnplus", "sprl", "dpr"): # read data with open(f"<path to files>/{name}_data.txt", "r") as f: data = f.read() data = data.split("\n\n") data = [lines.split("\n") for lines in data] data = [dict([col.split(":", maxsplit=1) for col in line if len(col) > 0]) for line in data] df = pd.DataFrame(data) # fill empty hypothesis_grammatical and judgement_valid df["hypothesis_grammatical"] = df["hypothesis_grammatical"].fillna("") df["judgement_valid"] = df["judgement_valid"].fillna("") # fix dtype df["index"] = df["index"].astype(int) # strip for col in df.select_dtypes(object).columns: df[col] = df[col].str.strip() # rename columns df = df.rename(columns={"entailed": "label"}) # encode labels df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1}) # cast to dataset features = Features({ "provenance": Value(dtype="string", id=None), "index": Value(dtype="int64", id=None), "text": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "partof": Value(dtype="string", id=None), "hypothesis_grammatical": Value(dtype="string", id=None), "judgement_valid": Value(dtype="string", id=None), "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]), }) # select common columns df = df.loc[:, list(features.keys())] ds[name] = Dataset.from_pandas(df, features=features) ds = DatasetDict(ds) ds.push_to_hub("recast_white", token="<token>") ```
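For completeness, here is a minimal loading sketch for the result. This is an illustration, not part of the original card: it assumes the dataset was pushed to the Hub as `pietrolesci/recast_white` with one split per recast source, matching the generation code above.

```python
from datasets import load_dataset

# One split per recast source, as built in the generation code: fnplus, sprl, dpr
ds = load_dataset("pietrolesci/recast_white")

# Decode the ClassLabel back to its string form for a sample row
sample = ds["dpr"][0]
label_names = ds["dpr"].features["label"].names  # ["not-entailed", "entailed"]
print(sample["text"], "|", sample["hypothesis"], "->", label_names[sample["label"]])
```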
pietrolesci/recast_white
[ "region:us" ]
2022-04-22T14:27:37+00:00
{}
2022-04-22T14:34:14+00:00
[]
[]
TAGS #region-us
## Overview This dataset has been introduced by "Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available here. ## Dataset curation The following processing is applied - 'hypothesis_grammatical' and 'judgement_valid' columns are filled with '""' when empty - all columns are stripped - the 'entailed' column is renamed 'label' - 'label' column is encoded with the following mapping '{"not-entailed": 0, "entailed": 1}' - columns 'rating' and 'good_word' are dropped from 'fnplus' dataset ## Code to generate the dataset
[ "## Overview\n\nThis dataset has been introduced by \"Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework\", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available here.", "## Dataset curation\nThe following processing is applied\n\n- 'hypothesis_grammatical' and 'judgement_valid' columns are filled with '\"\"' when empty\n- all columns are stripped\n- the 'entailed' column is renamed 'label'\n- 'label' column is encoded with the following mapping '{\"not-entailed\": 0, \"entailed\": 1}'\n- columns 'rating' and 'good_word' are dropped from 'fnplus' dataset", "## Code to generate the dataset" ]
[ "TAGS\n#region-us \n", "## Overview\n\nThis dataset has been introduced by \"Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework\", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available here.", "## Dataset curation\nThe following processing is applied\n\n- 'hypothesis_grammatical' and 'judgement_valid' columns are filled with '\"\"' when empty\n- all columns are stripped\n- the 'entailed' column is renamed 'label'\n- 'label' column is encoded with the following mapping '{\"not-entailed\": 0, \"entailed\": 1}'\n- columns 'rating' and 'good_word' are dropped from 'fnplus' dataset", "## Code to generate the dataset" ]
9603afe1e507fdc70f80ab3c532872fb217c7cc5
This dataset is a subset of the original ELI5 dataset from Hugging Face.
Pavithree/askHistorians
[ "region:us" ]
2022-04-22T15:14:54+00:00
{}
2022-04-22T15:22:10+00:00
[]
[]
TAGS #region-us
This dataset is a subset of the original ELI5 dataset from Hugging Face.
[]
[ "TAGS\n#region-us \n" ]
9372640c3a19eeae1396f9137339a8081fe38caa
This dataset is derived from the ELI5 dataset available on Hugging Face.
Pavithree/askScience
[ "region:us" ]
2022-04-22T15:39:35+00:00
{}
2022-04-22T15:45:27+00:00
[]
[]
TAGS #region-us
This dataset is derived from the ELI5 dataset available on Hugging Face.
[]
[ "TAGS\n#region-us \n" ]
27063178a7482239b710e3fd96a8d8eded299d1d
[Needs More Information] # Dataset Card for dei_article_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Diversity, Equity and Inclusion related article title, content, URL, sentiment and basis. Basis is a term I use to describe the underlying topic related to diversity; I have four at the moment: 1 = Gender, 2 = Race, 3 = Disability and 4 = Other. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields ID Title Content Basis URL Sentiment ### Data Splits train validate ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
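To make the Basis encoding concrete, here is a small illustrative sketch. The mapping restates the summary above; the example rows and column handling are hypothetical, not taken from the actual data.

```python
import pandas as pd

# Basis codes as described in the dataset summary
BASIS_LABELS = {1: "Gender", 2: "Race", 3: "Disability", 4: "Other"}

# Hypothetical rows mirroring the card's fields (ID, Title, Content, Basis, URL, Sentiment)
df = pd.DataFrame([
    {"ID": 1, "Title": "Pay gap report", "Basis": 1, "Sentiment": "negative"},
    {"ID": 2, "Title": "Accessible hiring push", "Basis": 3, "Sentiment": "positive"},
])

# Map the integer code to its human-readable basis label
df["BasisLabel"] = df["Basis"].map(BASIS_LABELS)
print(df[["ID", "Title", "BasisLabel", "Sentiment"]])
```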
deancgarcia/Diversity
[ "region:us" ]
2022-04-22T15:55:24+00:00
{}
2022-12-08T00:16:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for dei_article_sentiment ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Diversity, Equity and Inclusion related article title, content, URL, sentiment and basis. Basis is a term I use to describe the underlying topic related to diversity; I have four at the moment: 1 = Gender, 2 = Race, 3 = Disability and 4 = Other. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ID Title Content Basis URL Sentiment ### Data Splits train validate ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for dei_article_sentiment", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nDiversity Equity and Inclusion related article title, content, url, sentiment and basis. Basis is a term I use to describe the underline topic related to diveristy I have four at the moment 1 = Gender, 2 = Race, 3 = Disability and 4 = Other.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nID\nTitle\nContent\nBasis \nURL\nSentiment", "### Data Splits\n\ntrain\nvalidate", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "# Dataset Card for dei_article_sentiment", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nDiversity Equity and Inclusion related article title, content, url, sentiment and basis. Basis is a term I use to describe the underline topic related to diveristy I have four at the moment 1 = Gender, 2 = Race, 3 = Disability and 4 = Other.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nID\nTitle\nContent\nBasis \nURL\nSentiment", "### Data Splits\n\ntrain\nvalidate", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
765f4ff12812f047f92bd417ed64e5578436ebfe
# Dataset Card for [IU Ontology Trashed] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/ntcuong777) for adding this dataset.
ntcuong777/iuontology
[ "region:us" ]
2022-04-23T03:02:40+00:00
{}
2022-04-23T13:49:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for [IU Ontology Trashed] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [IU Ontology Trahsed]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#region-us \n", "# Dataset Card for [IU Ontology Trahsed]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
ef89c8242e095980a51c2264b0439ef0920ff2b1
VQGAN is great, but leaves artifacts that are especially visible around things like faces. It'd be great to be able to train a model to fix ('devqganify') these flaws. For this purpose, I've made this dataset, which contains 100k examples, each with
- A 512px image
- A smaller 256px version of the same image
- A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 1024 version from https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92, one of the ones from taming-transformers) and then decoding the result.

The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points. Let me know what you come up with :)

Usage:
```python
from datasets import load_dataset
dataset = load_dataset('johnowhitaker/vqgan1024_reconstruction')
dataset['train'][0]['image_256'] # Original image
dataset['train'][0]['reconstruction_256'] # Reconstructed version
```

Approximate code used to prepare this data: https://colab.research.google.com/drive/1AXzlRMvAIE6krkpFwFnFr2c5SnOsygf-?usp=sharing (let me know if you hit issues)

I'll be making a similar dataset with other VQGAN variants and posting progress on devqganify models soon, feel free to get in touch for more info (@johnowhitaker)
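As a rough sketch of the intended input/target pairing (assumptions: the image columns decode to PIL images of matching size, and mean absolute error is used here only as a crude artifact measure, not a training objective):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("johnowhitaker/vqgan1024_reconstruction", split="train")

ex = ds[0]
target = np.asarray(ex["image_256"], dtype=np.float32) / 255.0              # clean original (the training target)
model_input = np.asarray(ex["reconstruction_256"], dtype=np.float32) / 255.0  # VQGAN output (the model input)

# A 'devqganify' model would learn model_input -> target; this number
# gives a crude sense of how much artifact there is to remove.
print("mean abs error:", float(np.abs(target - model_input).mean()))
```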
johnowhitaker/vqgan1024_reconstruction
[ "region:us" ]
2022-04-23T03:52:52+00:00
{}
2022-04-23T11:50:13+00:00
[]
[]
TAGS #region-us
VQGAN is great, but leaves artifacts that are especially visible around things like faces. It'd be great to be able to train a model to fix ('devqganify') these flaws. For this purpose, I've made this dataset, which contains 100k examples, each with
- A 512px image
- A smaller 256px version of the same image
- A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 1024 version from URL one of the ones from taming-transformers) and then decoding the result.

The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points. Let me know what you come up with :)

Usage:

'

Approximate code used to prepare this data: URL (let me know if you hit issues)

I'll be making a similar dataset with other VQGAN variants and posting progress on devqganify models soon, feel free to get in touch for more info (@johnowhitaker)
[]
[ "TAGS\n#region-us \n" ]
45fcb031e0510483c13d10b6557aae26fc85df52
This dataset is a subset of the original ELI5 dataset available on the Hugging Face Hub.
Pavithree/eli5_split
[ "region:us" ]
2022-04-23T07:22:39+00:00
{}
2022-04-23T07:33:53+00:00
[]
[]
TAGS #region-us
This dataset is a subset of the original ELI5 dataset available on the Hugging Face Hub.
[]
[ "TAGS\n#region-us \n" ]