| sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| stringlengths 40-40 | stringlengths 1-13.4M | stringlengths 2-117 | sequencelengths 1-7.91k | stringlengths 25-25 | stringlengths 2-875k | stringlengths 25-25 | sequencelengths 0-25 | sequencelengths 0-7.91k | stringlengths 17-159k | stringlengths 1-447k | sequencelengths 0-352 | sequencelengths 1-353 |
a4060f6c30fac71147c6f424fd6adb3b0b753f59 | Images from CC12M encoded with VQGAN f16 1024
Script to continue prep is included in the repo if you want more than the ~1.5M images I did here.
VQGAN model:
```
!curl -L 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fckpts%2Flast.ckpt&dl=1' > vqgan_im1024.ckpt
!curl -L 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' > vqgan_im1024.yaml
```
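For reference, a minimal sketch of how the downloaded checkpoint and config could be loaded to encode an image. It assumes the `taming-transformers` package is installed; the preprocessing (256px centre crop) and variable names are illustrative rather than taken from the prep script in this repo.
```python
# Sketch only: load the VQGAN f16 1024 model downloaded above and encode one image.
# Assumes taming-transformers is installed; names below are illustrative.
import torch
import torchvision.transforms as T
from omegaconf import OmegaConf
from PIL import Image
from taming.models.vqgan import VQModel

config = OmegaConf.load("vqgan_im1024.yaml")
model = VQModel(**config.model.params)
state_dict = torch.load("vqgan_im1024.ckpt", map_location="cpu")["state_dict"]
model.load_state_dict(state_dict, strict=False)  # strict=False skips any loss/discriminator weights
model.eval()

img = Image.open("example.jpg").convert("RGB")
x = T.Compose([T.Resize(256), T.CenterCrop(256), T.ToTensor()])(img)
x = x.unsqueeze(0) * 2.0 - 1.0  # VQGAN expects inputs scaled to [-1, 1]

with torch.no_grad():
    quant, _, (_, _, indices) = model.encode(x)  # `indices` are the discrete codes
    recon = model.decode(quant)                  # decode back to pixels as a sanity check
print(indices.shape)
```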
Try it out: TODO | johnowhitaker/vqgan1024_encs_sf | [
"region:us"
] | 2022-04-23T15:07:38+00:00 | {} | 2022-04-23T15:22:37+00:00 | [] | [] | TAGS
#region-us
|
44fe0b34f20ba09aa287148447873c1f3992e265 |
# MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE | 51 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU):
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The corpus consists of parallel sentences from 51 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/MASSIVE", "en-US", split='train')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "1",
"locale": "fr-FR",
"partition": "train",
"scenario": 16,
"intent": 48,
"utt": "réveille-moi à neuf heures du matin le vendredi",
"annot_utt": "réveille-moi à [time : neuf heures du matin] le [date : vendredi]",
"tokens": [
"réveille-moi",
"à",
"neuf",
"heures",
"du",
"matin",
"le",
"vendredi"
],
"ner_tags": [0, 0, 71, 6, 6, 6, 0, 14],
"worker_id": "22",
"slot_method": {
"slot": ["time", "date"],
"method": ["translation", "translation"]
},
"judgments": {
"worker_id": ["11", "22", "0"],
"intent_score": [2, 1, 1],
"slots_score": [1, 1, 1],
"grammar_score": [3, 4, 4],
"spelling_score": [2, 2, 2],
"language_identification": ["target", "target", "target"]
}
}
```
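To make the instance above easier to read, here is a small snippet that lines each token up with its numeric slot tag; the `int2str` part only works if the loading script exposes `ClassLabel` features, which is an assumption worth checking via `dataset.features`.
```python
ex = dataset[0]  # `dataset` as loaded in the snippet above

# Pair each token with its numeric slot tag.
for token, tag in zip(ex["tokens"], ex["ner_tags"]):
    print(f"{token}\t{tag}")

# Assumption: if the intent/scenario columns are ClassLabel features,
# the integer ids can be mapped back to their string names.
if hasattr(dataset.features["intent"], "int2str"):
    print(dataset.features["intent"].int2str(ex["intent"]))
```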
### Data Fields (taken from the Alexa GitHub)
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.
`locale`: is the language and country code according to ISO-639-1 and ISO-3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance.
`intent`: is the specific intent of an utterance within a domain formatted as `{scenario}_{intent}`.
`utt`: the raw utterance text without annotations.
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`.
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
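As an illustration of the `annot_utt` format described under Data Fields, a minimal regex-based sketch (not part of the official MASSIVE tooling) for pulling out the `[label : entity]` spans:
```python
import re

def parse_annot_utt(annot_utt: str):
    """Extract (label, entity) pairs from the '[{label} : {entity}]' annotation format."""
    return re.findall(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]", annot_utt)

print(parse_annot_utt("réveille-moi à [time : neuf heures du matin] le [date : vendredi]"))
# [('time', 'neuf heures du matin'), ('date', 'vendredi')]
```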
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
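Because every locale shares the same split sizes, several locale configurations can be concatenated for multilingual training; a short sketch (the locale subset is arbitrary):
```python
from datasets import load_dataset, concatenate_datasets

locales = ["en-US", "fr-FR", "de-DE"]  # any subset of the locales in the table above
train = concatenate_datasets(
    [load_dataset("qanastek/MASSIVE", loc, split="train") for loc in locales]
)
print(len(train))  # 3 x 11,514 = 34,542 training utterances
```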
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Amazon Alexa.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following paper when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
| qanastek/MASSIVE | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:lv",
"language:ml",
"language:mn",
"language:ms",
"language:my",
"language:nb",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:sq",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"arxiv:2204.08582",
"region:us"
] | 2022-04-23T15:23:09+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["found"], "language": ["af", "am", "ar", "az", "bn", "cy", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "km", "kn", "ko", "lv", "ml", "mn", "ms", "my", "nb", "nl", "pl", "pt", "ro", "ru", "sl", "sq", "sv", "sw", "ta", "te", "th", "tl", "tr", "ur", "vi", "zh", "zh"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-class-classification", "named-entity-recognition"], "pretty_name": "MASSIVE", "language_bcp47": ["af-ZA", "am-ET", "ar-SA", "az-AZ", "bn-BD", "cy-GB", "da-DK", "de-DE", "el-GR", "en-US", "es-ES", "fa-IR", "fi-FI", "fr-FR", "he-IL", "hi-IN", "hu-HU", "hy-AM", "id-ID", "is-IS", "it-IT", "ja-JP", "jv-ID", "ka-GE", "km-KH", "kn-IN", "ko-KR", "lv-LV", "ml-IN", "mn-MN", "ms-MY", "my-MM", "nb-NO", "nl-NL", "pl-PL", "pt-PT", "ro-RO", "ru-RU", "sl-SL", "sq-AL", "sv-SE", "sw-KE", "ta-IN", "te-IN", "th-TH", "tl-PH", "tr-TR", "ur-PK", "vi-VN", "zh-CN", "zh-TW"]} | 2022-12-23T21:28:08+00:00 | [
"2204.08582"
] | [
"af",
"am",
"ar",
"az",
"bn",
"cy",
"da",
"de",
"el",
"en",
"es",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"km",
"kn",
"ko",
"lv",
"ml",
"mn",
"ms",
"my",
"nb",
"nl",
"pl",
"pt",
"ro",
"ru",
"sl",
"sq",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ur",
"vi",
"zh"
] | TAGS
#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #task_ids-named-entity-recognition #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Afrikaans #language-Amharic #language-Arabic #language-Azerbaijani #language-Bengali #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hebrew #language-Hindi #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Khmer #language-Kannada #language-Korean #language-Latvian #language-Malayalam #language-Mongolian #language-Malay (macrolanguage) #language-Burmese #language-Norwegian Bokmål #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovenian #language-Albanian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Turkish #language-Urdu #language-Vietnamese #language-Chinese #arxiv-2204.08582 #region-us
|
5ccd054e794667994e2fd3b6a5ff01bed70f9acf | VQGAN is great, but leaves artifacts that are especially visible around things like faces.
It'd be great to be able to train a model to fix ('devqganify') these flaws.
For this purpose, I've made this dataset, which contains >100k examples, each with
- A 512px image
- A smaller 256px version of the same image
- A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 16384 imagenet version from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) and then decoding the result.
The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points.
Let me know what you come up with :)
Usage:
```python
from datasets import load_dataset
dataset = load_dataset('johnowhitaker/vqgan16k_reconstruction')
dataset['train'][0]['image_256']          # Original 256px image
dataset['train'][0]['reconstruction_256'] # VQGAN-reconstructed version
```
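Building on the usage snippet above, here is a hedged sketch of wrapping the pairs for PyTorch training of a 'devqganify' model. Field names follow the snippet; it assumes the image columns decode to PIL images, and the 512px target is left out because its exact column name isn't stated here.
```python
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from datasets import load_dataset

class DevqganifyPairs(Dataset):
    """Yields (vqgan_reconstruction, original) tensor pairs for training a restoration model."""
    def __init__(self, split="train"):
        self.ds = load_dataset("johnowhitaker/vqgan16k_reconstruction", split=split)
        self.to_tensor = transforms.ToTensor()  # assumes the columns decode to PIL images

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        row = self.ds[idx]
        x = self.to_tensor(row["reconstruction_256"])  # degraded VQGAN output (input)
        y = self.to_tensor(row["image_256"])           # clean 256px image (target)
        return x, y
```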
Approximate code used to prepare this data (vqgan model was changed for this version): https://colab.research.google.com/drive/1AXzlRMvAIE6krkpFwFnFr2c5SnOsygf-?usp=sharing (let me know if you hit issues)
The VQGAN model used for this version: https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/
See also: https://huggingface.co/datasets/johnowhitaker/vqgan1024_reconstruction (same idea but vqgan with smaller vocab size of 1024) | johnowhitaker/vqgan16k_reconstruction | [
"region:us"
] | 2022-04-23T17:00:28+00:00 | {} | 2022-04-24T05:13:26+00:00 | [] | [] | TAGS
#region-us
|
ebe02645e5511e32c87c79746a75dc2d45bae062 | # Dataset Card for [Kaggle MNLI]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/c/multinli-matched-open-evaluation
- **Repository:** chrishuber/roberta-retrained-mlni
- **Paper:** Inference Detection in NLP Using the MultiNLI and SNLI Datasets
- **Leaderboard:** 8
- **Point of Contact:** [email protected]
### Dataset Summary
[These are the datasets posted to Kaggle for an inference detection NLP competition. Moving them here to use with Pytorch.]
### Supported Tasks and Leaderboards
Provides train and validation data for sentence pairs with inference labels.
[https://www.kaggle.com/competitions/multinli-matched-open-evaluation/leaderboard]
[https://www.kaggle.com/competitions/multinli-mismatched-open-evaluation/leaderboard]
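As a quick sanity check before wiring the data into a PyTorch pipeline, you can inspect what the repository actually exposes (this assumes the files load with the default `load_dataset` configuration; column names are best checked rather than assumed):

```python
from datasets import load_dataset

dataset = load_dataset("chrishuber/kaggle_mnli")
print(dataset)                        # available splits and row counts
print(dataset["train"].column_names)  # sentence-pair and label columns
```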
### Languages
[JSON, Python]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[Reposted from https://www.kaggle.com/c/multinli-matched-open-evaluation and https://www.kaggle.com/c/multinli-mismatched-open-evaluation]
### Source Data
#### Initial Data Collection and Normalization
[Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.]
#### Who are the source language producers?
[Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.]
### Annotations
#### Annotation process
[Crowdsourcing using MechanicalTurk.]
#### Who are the annotators?
[MechanicalTurk users.]
### Personal and Sensitive Information
[None.]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Kaggle]
### Licensing Information
[More Information Needed]
### Citation Information
[https://www.kaggle.com/c/multinli-matched-open-evaluation]
[https://www.kaggle.com/c/multinli-mismatched-open-evaluation]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | chrishuber/kaggle_mnli | [
"arxiv:1704.05426",
"region:us"
] | 2022-04-23T17:16:05+00:00 | {} | 2022-04-23T18:19:52+00:00 | [
"1704.05426"
] | [] | TAGS
#arxiv-1704.05426 #region-us
| # Dataset Card for [Kaggle MNLI]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: chrishuber/roberta-retrained-mlni
- Paper: Inference Detection in NLP Using the MultiNLI and SNLI Datasets
- Leaderboard: 8
- Point of Contact: chrish@URL
### Dataset Summary
[These are the datasets posted to Kaggle for an inference detection NLP competition. Moving them here to use with Pytorch.]
### Supported Tasks and Leaderboards
Provides train and validation data for sentence pairs with inference labels.
[URL
[URL
### Languages
[JSON, Python]
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[Reposted from URL and URL
### Source Data
#### Initial Data Collection and Normalization
[Please see the article at URL which discusses the creation of the MNLI dataset.]
#### Who are the source language producers?
[Please see the article at URL which discusses the creation of the MNLI dataset.]
### Annotations
#### Annotation process
[Crowdsourcing using MechanicalTurk.]
#### Who are the annotators?
[MechanicalTurk users.]
### Personal and Sensitive Information
[None.]
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
[Kaggle]
### Licensing Information
[URL
[URL
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Kaggle MNLI]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL \n- Repository: chrishuber/roberta-retrained-mlni \n- Paper: Inference Detection in NLP Using the MultiNLI and SNLI Datasets\n- Leaderboard: 8\n- Point of Contact: chrish@URL",
"### Dataset Summary\n\n[These are the datasets posted to Kaggle for an inference detection NLP competition. Moving them here to use with Pytorch.]",
"### Supported Tasks and Leaderboards\n\nProvides train and validation data for sentence pairs with inference labels.\n[URL\n[URL",
"### Languages\n\n[JSON, Python]",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\n[Reposted from URL and URL",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n[Please see the article at URL which discusses the creation of the MNLI dataset.]",
"#### Who are the source language producers?\n\n[Please see the article at URL which discusses the creation of the MNLI dataset.]",
"### Annotations",
"#### Annotation process\n\n[Crowdsourcing using MechanicalTurk.]",
"#### Who are the annotators?\n\n[MechanicalTurk users.]",
"### Personal and Sensitive Information\n\n[None.]",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n[Kaggle]",
"### Licensing Information\n\n\n\n\n\n[URL\n[URL",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#arxiv-1704.05426 #region-us \n",
"# Dataset Card for [Kaggle MNLI]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL \n- Repository: chrishuber/roberta-retrained-mlni \n- Paper: Inference Detection in NLP Using the MultiNLI and SNLI Datasets\n- Leaderboard: 8\n- Point of Contact: chrish@URL",
"### Dataset Summary\n\n[These are the datasets posted to Kaggle for an inference detection NLP competition. Moving them here to use with Pytorch.]",
"### Supported Tasks and Leaderboards\n\nProvides train and validation data for sentence pairs with inference labels.\n[URL\n[URL",
"### Languages\n\n[JSON, Python]",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\n[Reposted from URL and URL",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n[Please see the article at URL which discusses the creation of the MNLI dataset.]",
"#### Who are the source language producers?\n\n[Please see the article at URL which discusses the creation of the MNLI dataset.]",
"### Annotations",
"#### Annotation process\n\n[Crowdsourcing using MechanicalTurk.]",
"#### Who are the annotators?\n\n[MechanicalTurk users.]",
"### Personal and Sensitive Information\n\n[None.]",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n[Kaggle]",
"### Licensing Information\n\n\n\n\n\n[URL\n[URL",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
ac3f65840a512ce745231e9d6339c2bc83e61582 |
## Dataset Description
- **Homepage:** None
- **Repository:** None
- **Paper:** None
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
| d0r1h/Shlokam | [
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sn",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-04-24T08:50:02+00:00 | {"annotations_creators": "found", "language_creators": ["found"], "language": ["sn", "en"], "license": "cc-by-3.0", "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "pretty_name": "Shlokam"} | 2022-10-25T09:09:04+00:00 | [] | [
"sn",
"en"
] | TAGS
#language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-Shona #language-English #license-cc-by-3.0 #region-us
|
## Dataset Description
- Homepage: None
- Repository: None
- Paper: None
- Leaderboard:
- Point of Contact:
| [
"## Dataset Description\n\n- Homepage: None\n- Repository: None\n- Paper: None\n- Leaderboard: \n- Point of Contact:"
] | [
"TAGS\n#language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-Shona #language-English #license-cc-by-3.0 #region-us \n",
"## Dataset Description\n\n- Homepage: None\n- Repository: None\n- Paper: None\n- Leaderboard: \n- Point of Contact:"
] |
7a414e80725eac766f2602676dc8b39f80b061e4 |
## Dataset Summary
FaithDial is a faithful knowledge-grounded dialogue benchmark, composed of **50,761** turns spanning **5649** conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) (WoW). In our dialogue setting, we simulate interactions between two speakers: **an information seeker** and **a bot wizard**. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules:
- **First**, it should be truthful by providing information that is attributable to the source knowledge *K*.
- **Second**, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions.
- **Third**, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*.
## Dataset Description
- **Homepage:** [FaithDial](https://mcgill-nlp.github.io/FaithDial/)
- **Repository:** [GitHub](https://github.com/McGill-NLP/FaithDial)
- **Point of Contact:** [Nouha Dziri](mailto:[email protected])
## Language
English
## Data Instance
An example of 'train' looks as follows:
```text
[
{
"utterances": [
... // prior utterances,
{
"history": [
"Have you ever been to a concert? They're so fun!",
"No I cannot as a bot. However, have you been to Madonna's? Her 10th concert was used to help her 13th album called \"Rebel Heart\".",
"Yeah I've heard of it but never went or what it was for. Can you tell me more about it?"
],
"speaker": "Wizard",
"knowledge": "It began on September 9, 2015, in Montreal, Canada, at the Bell Centre and concluded on March 20, 2016, in Sydney, Australia at Allphones Arena.",
"original_response": "It started in September of 2015 and ran all the way through March of 2016. Can you imagine being on the road that long?",
"response": "Sure. The concert started in September 9th of 2015 at Montreal, Canada. It continued till 20th of March of 2016, where it ended at Sydney, Australia.",
"BEGIN": [
"Hallucination",
"Entailment"
],
"VRM": [
"Disclosure",
"Question"
]
},
... // more utterances
]
},
... // more dialogues
]
```
If the `original_response` is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos.
## Data Fields
- `history`: `List[string]`. The dialogue history.
- `knowledge`: `string`. The source knowledge on which the bot wizard should ground its response.
- `speaker`: `string`. The current speaker.
- `original response`: `string`. The WoW original response before editing it.
- `response`: `string`. The new Wizard response.
- `BEGIN`: `List[string]`. The BEGIN labels for the Wizard response.
- `VRM`: `List[string]`. The VRM labels for the wizard response.
## Data Splits
- `Train`: 36809 turns
- `Valid`: 6851 turns
- `Test`: 7101 turns
`Valid` includes both the `seen` and the `unseen` data splits from WoW. The same applies to `Test`. We also include those splits for FaithDial valid and test data.
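As a quick start, the dataset can be loaded from the Hub and the per-turn fields above accessed directly (a sketch; the exact split identifiers exposed by the loader may differ from the names above):

```python
from datasets import load_dataset

faithdial = load_dataset("McGill-NLP/FaithDial")
print(faithdial)  # splits and number of turns

turn = faithdial["train"][0]
print(turn["knowledge"])            # source knowledge K
print(turn["history"])              # dialogue history
print(turn["response"])             # faithful Wizard response
print(turn["BEGIN"], turn["VRM"])   # attribution and speech-act labels
```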
## Annotations
Following the guidelines for ethical crowdsourcing outlined in [Sheehan. 2018](https://www.tandfonline.com/doi/abs/10.1080/03637751.2017.1342043),
we hire Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples for our terminology. Moreover, we performed several staging rounds over the course of several months.
### Who are the annotators?
To be eligible for the task, workers have to be located in the United States and Canada and have to answer successfully 20 questions as part of a qualification test. Before launching the main annotation task, we perform a small pilot round (60 HITS) to check the performance of the workers. We email workers who commit errors, providing them with examples on how to fix their mistakes in future HITS.
## Personal and Sensitive Information
Seeker utterances in FaithDial may contain personal and sensitive information.
## Social Impact of Dataset
In recent years, the conversational AI market has seen
a proliferation of a variety of applications—which are powered by large pre-trained LMs—that span
across a broad range of domains, such as customer
support, education, e-commerce, health, entertainment, etc. Ensuring that
these systems are trustworthy is key to deploying them safely at large scale in real-world applications, especially in high-stakes domains. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and to make virtual assistants both safer and more reliable.
## Licensing Information
MIT
## Citation Information
```bibtex
@article{dziri2022faithdial,
title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue},
author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva},
journal={arXiv preprint, arXiv:2204.10757},
year={2022},
url={https://arxiv.org/abs/2204.10757}
}
```
| McGill-NLP/FaithDial | [
"task_categories:conversational",
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"language:en",
"license:mit",
"faithful-dialogue-modeling",
"trustworthy-dialogue-modeling",
"arxiv:2204.10757",
"region:us"
] | 2022-04-24T22:10:52+00:00 | {"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "task_categories": ["conversational", "text-generation"], "task_ids": ["dialogue-modeling"], "pretty_name": "A Faithful Benchmark for Information-Seeking Dialogue", "tags": ["faithful-dialogue-modeling", "trustworthy-dialogue-modeling"]} | 2023-02-05T04:09:45+00:00 | [
"2204.10757"
] | [
"en"
] | TAGS
#task_categories-conversational #task_categories-text-generation #task_ids-dialogue-modeling #annotations_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100k #language-English #license-mit #faithful-dialogue-modeling #trustworthy-dialogue-modeling #arxiv-2204.10757 #region-us
|
## Dataset Summary
FaithDial is a faithful knowledge-grounded dialogue benchmark, composed of 50,761 turns spanning 5649 conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in Wizard of Wikipedia (WoW). In our dialogue setting, we simulate interactions between two speakers: an information seeker and a bot wizard. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules:
- First, it should be truthful by providing information that is attributable to the source knowledge *K*.
- Second, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions.
- Third, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*.
## Dataset Description
- Homepage: FaithDial
- Repository: GitHub
- Point of Contact: Nouha Dziri
## Language
English
## Data Instance
An example of 'train' looks as follows:
If the 'original_response' is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos.
## Data Fields
- 'history': 'List[string]'. The dialogue history.
- 'knowledge': 'string'. The source knowledge on which the bot wizard should ground its response.
- 'speaker': 'string'. The current speaker.
- 'original response': 'string'. The WoW original response before editing it.
- 'response': 'string'. The new Wizard response.
- 'BEGIN': 'List[string]'. The BEGIN labels for the Wizard response.
- 'VRM': 'List[string]'. The VRM labels for the wizard response.
## Data Splits
- 'Train': 36809 turns
- 'Valid': 6851 turns
- 'Test': 7101 turns
'Valid' includes both the 'seen' and the 'unseen' data splits from WoW. The same applies to 'Test'. We also include those splits for FaithDial valid and test data.
## Annotations
Following the guidelines for ethical crowdsourcing outlined in Sheehan. 2018,
we hire Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples for our terminology. Moreover, we performed several staging rounds over the course of several months.
# Who are the annotators?
To be eligible for the task, workers have to be located in the United States and Canada and have to answer successfully 20 questions as part of a qualification test. Before launching the main annotation task, we perform a small pilot round (60 HITS) to check the performance of the workers. We email workers who commit errors, providing them with examples on how to fix their mistakes in future HITS.
## Personal and Sensitive Information
Seeker utterances in FaithDial may contain personal and sensitive information.
## Social Impact of Dataset
In recent years, the conversational AI market has seen
a proliferation of a variety of applications—which are powered by large pre-trained LMs—that span
across a broad range of domains, such as customer
support, education, e-commerce, health, entertainment, etc. Ensuring that
these systems are trustworthy is key to deploy systems safely at a large scale in real-world application, especially in high-stake domain. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and make virtual assistants both safer and more reliable.
## Licensing Information
MIT
| [
"## Dataset Summary\nFaithDial is a faithful knowledge-grounded dialogue benchmark, composed of 50,761 turns spanning 5649 conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in Wizard of Wikipedia (WoW). In our dialogue setting, we simulate interactions between two speakers: an information seeker and a bot wizard. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules: \n- First, it should be truthful by providing information that is attributable to the source knowledge *K*. \n- Second, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions. \n- Third, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*.",
"## Dataset Description\n\n- Homepage: FaithDial\n- Repository: GitHub\n- Point of Contact: Nouha Dziri",
"## Language\nEnglish",
"## Data Instance\n\nAn example of 'train' looks as follows: \n\nIf the 'original_response' is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos.",
"## Data Fields\n\n - 'history': 'List[string]'. The dialogue history.\n - 'knowledge': 'string'. The source knowkedge on which the bot wizard should ground its response.\n - 'speaker': 'string'. The current speaker.\n - 'original response': 'string'. The WoW original response before editing it.\n - 'response': 'string'. The new Wizard response.\n - 'BEGIN': 'List[string]'. The BEGIN labels for the Wizard response.\n - 'VRM': 'List[string]'. The VRM labels for the wizard response.",
"## Data Splits\n\n- 'Train': 36809 turns\n- 'Valid': 6851 turns\n- 'Test': 7101 turns\n\n'Valid' includes both the 'seen' and the 'unseen' data splits from WoW. The same applies to 'Test'. We also include those splits for FaithDial valid and test data.",
"## Annotations\nFollowing the guidelines for ethical crowdsourcing outlined in Sheehan. 2018, \nwe hire Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples for our terminology. Moreover, we performed several staging rounds over the course of several months.",
"# Who are the annotators?\n\nTo be eligible for the task, workers have to be located in the United States and Canada and have to answer successfully 20 questions as part of a qualification test. Before launching the main annotation task, we perform a small pilot round (60 HITS) to check the performance of the workers. We email workers who commit errors, providing them with examples on how to fix their mistakes in future HITS.",
"## Personal and Sensitive Information\nSeeker utterances in FaithDial may contain personal and sensitive information.",
"## Social Impact of Dataset\n In recent years, the conversational AI market has seen\na proliferation of a variety of applications—which are powered by large pre-trained LMs—that span\nacross a broad range of domains, such as customer\nsupport, education, e-commerce, health, entertainment, etc. Ensuring that\nthese systems are trustworthy is key to deploy systems safely at a large scale in real-world application, especially in high-stake domain. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and make virtual assistants both safer and more reliable.",
"## Licensing Information\n\nMIT"
] | [
"TAGS\n#task_categories-conversational #task_categories-text-generation #task_ids-dialogue-modeling #annotations_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100k #language-English #license-mit #faithful-dialogue-modeling #trustworthy-dialogue-modeling #arxiv-2204.10757 #region-us \n",
"## Dataset Summary\nFaithDial is a faithful knowledge-grounded dialogue benchmark, composed of 50,761 turns spanning 5649 conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in Wizard of Wikipedia (WoW). In our dialogue setting, we simulate interactions between two speakers: an information seeker and a bot wizard. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules: \n- First, it should be truthful by providing information that is attributable to the source knowledge *K*. \n- Second, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions. \n- Third, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*.",
"## Dataset Description\n\n- Homepage: FaithDial\n- Repository: GitHub\n- Point of Contact: Nouha Dziri",
"## Language\nEnglish",
"## Data Instance\n\nAn example of 'train' looks as follows: \n\nIf the 'original_response' is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos.",
"## Data Fields\n\n - 'history': 'List[string]'. The dialogue history.\n - 'knowledge': 'string'. The source knowkedge on which the bot wizard should ground its response.\n - 'speaker': 'string'. The current speaker.\n - 'original response': 'string'. The WoW original response before editing it.\n - 'response': 'string'. The new Wizard response.\n - 'BEGIN': 'List[string]'. The BEGIN labels for the Wizard response.\n - 'VRM': 'List[string]'. The VRM labels for the wizard response.",
"## Data Splits\n\n- 'Train': 36809 turns\n- 'Valid': 6851 turns\n- 'Test': 7101 turns\n\n'Valid' includes both the 'seen' and the 'unseen' data splits from WoW. The same applies to 'Test'. We also include those splits for FaithDial valid and test data.",
"## Annotations\nFollowing the guidelines for ethical crowdsourcing outlined in Sheehan. 2018, \nwe hire Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples for our terminology. Moreover, we performed several staging rounds over the course of several months.",
"# Who are the annotators?\n\nTo be eligible for the task, workers have to be located in the United States and Canada and have to answer successfully 20 questions as part of a qualification test. Before launching the main annotation task, we perform a small pilot round (60 HITS) to check the performance of the workers. We email workers who commit errors, providing them with examples on how to fix their mistakes in future HITS.",
"## Personal and Sensitive Information\nSeeker utterances in FaithDial may contain personal and sensitive information.",
"## Social Impact of Dataset\n In recent years, the conversational AI market has seen\na proliferation of a variety of applications—which are powered by large pre-trained LMs—that span\nacross a broad range of domains, such as customer\nsupport, education, e-commerce, health, entertainment, etc. Ensuring that\nthese systems are trustworthy is key to deploy systems safely at a large scale in real-world application, especially in high-stake domain. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and make virtual assistants both safer and more reliable.",
"## Licensing Information\n\nMIT"
] |
5af3b4f0df36436a071954af1d499b9753c0f27b |
# TAU Spatial Room Impulse Response Database (TAU-SRIR DB)
## Important
**This is a copy from the Zenodo Original one**
## Description
[Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)
AUTHORS
**Tampere University**
- Archontis Politis ([contact](mailto:[email protected]), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
- Sharath Adavanne ([contact](mailto:[email protected]), [profile](https://www.aane.in))
- Tuomas Virtanen ([contact](mailto:[email protected]), [profile](https://homepages.tuni.fi/tuomas.virtanen/))
**Data Collection 2019-2020**
- Archontis Politis
- Aapo Hakala
- Ali Gohar
**Data Collection 2017-2018**
- Sharath Adavanne
- Aapo Hakala
- Eemi Fagerlund
- Aino Koskimies
The **TAU Spatial Room Impulse Response Database (TAU-SRIR DB)** database contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulation of spatial multichannel recordings for evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinct properties of the database compared to other databases of room impulse responses are:
- Capturing in a high resolution multichannel format (32 channels) from which multiple more limited application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural).
- Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving source scenarios.
- Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods.
- Multiple rooms, allowing evaluation of methods at various acoustic conditions, and training of methods with the aim of generalization on different rooms.
The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement [637422 EVERYSOUND](https://cordis.europa.eu/project/id/637422).
[European Research Council (ERC)](https://erc.europa.eu/)
> **NOTE**: This database is a work-in-progress. We intend to publish additional rooms, additional formats, and potentially higher-fidelity versions of the captured responses in the near future, as new versions of the database in this repository.
## Report and reference
A compact description of the dataset, recording setup, recording procedure, and extraction can be found in:
>Politis, Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). **A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection**. In _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)_, Tokyo, Japan.
available [here](https://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Politis_88.pdf). A more detailed report specifically focusing on the dataset collection and properties will follow.
## Aim
The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. It is especially suitable for the following application scenarios:
- monophonic and multichannal reverberant single- or multi-source speech in multi-room reverberant conditions,
- monophonic and multichannel polyphonic sound events in multi-room reverberant conditions,
- single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios,
- single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios,
- sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios.
## Specifications
The SRIRs were captured using an [Eigenmike](https://mhacoustics.com/products) spherical microphone array. A [Genelec G Three loudspeaker](https://www.genelec.com/g-three) was used to playback a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and far-field recording independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24kHz for compactness.
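For intuition, a heavily simplified single-channel version of this kind of estimation is a regularized frequency-domain deconvolution of the recording against the known excitation (this sketch is not the authors' exact multi-frame STFT least-squares procedure):

```python
import numpy as np

def estimate_rir(mls, recording, rir_len, eps=1e-8):
    """Rough per-frequency least-squares estimate of an impulse response.

    mls:       known excitation signal played by the loudspeaker
    recording: one microphone channel of the far-field recording (same sample rate)
    rir_len:   number of impulse-response samples to keep
    """
    n = len(mls) + len(recording)
    X = np.fft.rfft(mls, n)
    Y = np.fft.rfft(recording, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)  # regularized LS solution per frequency
    return np.fft.irfft(H, n)[:rir_len]
```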
The currently published SRIR set was recorded at nine different indoor locations inside the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ with the room. Possible azimuths span the whole range of $\phi\in[-180,180)$, while the elevations span approximately a range between $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows:
1. Large open space in underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise.
2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms.
3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise.
4. Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise.
5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise.
6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise.
7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise.
8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise.
9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise.
The measurement trajectories were organized in groups, with each group being specified by a circular or linear trace on the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories two ranges were measured, a _close_ and a _far_ one, except in room TC352, where the same range was measured twice, but with a different furniture configuration and open or closed doors. For linear trajectories two ranges were also measured, _close_ and _far_, but with linear paths at either side of the array, resulting in 4 unique trajectory groups, with the exception of room SA203 where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups are always parallel to each other, in the same room.
Each trajectory group had multiple measurement trajectories, following the same floor path, but with the source at different heights.
The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately every 1 degree as seen from the microphone. This extraction scheme, rather than extracting SRIRs at equally spaced points along the path (e.g. every 20 cm), was found more practical for synthesis purposes, as it makes it easier to emulate moving sources at an approximately constant angular speed.
The following table summarizes the above properties for the currently available rooms:
| | Room name | Room type | Traj. type | # ranges | # trajectory groups | # heights/group | # trajectories (total) | # RIRs/DOAs |
|---|--------------------------|----------------------------|------------|-------------|-----------------------|---------------------|------------------------|-------------|
| 1 | Bomb shelter | Complex/semi-open | Circular | 2 | 2 | 9 | 18 | 6480 |
| 2 | Gym | Rectangular/large | Circular | 2 | 2 | 9 | 18 | 6480 |
| 3 | PB132 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 4 | PC226 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 5 | SA203 Lecture hall | Trapezoidal/large | Linear | 3 | 6 | 3 | 18 | 1594 |
| 6 | SC203 Classroom | Rectangular/medium | Linear | 2 | 4 | 5 | 20 | 1592 |
| 7 | SE203 Classroom | Rectangular/large | Linear | 2 | 4 | 4 | 16 | 1760 |
| 8 | TB103 Classroom | Trapezoidal/large | Linear | 2 | 4 | 3 | 12 | 1184 |
| 9 | TC352 Meeting room | Rectangular/small | Circular | 1 | 2 | 9 | 18 | 6480 |
More details on the trajectory geometries can be found in the database info file (`measinfo.mat`).
## Recording formats
The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
**For the first-order ambisonics (FOA):**
\begin{eqnarray}
H_1(\phi, \theta, f) &=& 1 \\
H_2(\phi, \theta, f) &=& \sin(\phi) * \cos(\theta) \\
H_3(\phi, \theta, f) &=& \sin(\theta) \\
H_4(\phi, \theta, f) &=& \cos(\phi) * \cos(\theta)
\end{eqnarray}
The (FOA) format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9kHz with the specific microphone array, while the actual encoded responses start to deviate gradually at higher frequencies from the ideal ones provided above. Routines that can compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found [here](https://github.com/polarch/Spherical-Array-Processing).
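In code, the ideal frequency-independent FOA response above is simply:

```python
import numpy as np

def foa_response(azi_deg, ele_deg):
    """Ideal FOA steering vector [H1, H2, H3, H4] for a plane wave from (azimuth, elevation) in degrees."""
    azi, ele = np.radians(azi_deg), np.radians(ele_deg)
    return np.array([
        1.0,                         # H1: omnidirectional component
        np.sin(azi) * np.cos(ele),   # H2
        np.sin(ele),                 # H3
        np.cos(azi) * np.cos(ele),   # H4
    ])
```

This channel ordering corresponds to the omnidirectional component followed by the three first-order (dipole) components, as in the formulas above.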
**For the tetrahedral microphone array (MIC):**
The four microphone have the following positions, in spherical coordinates $(\phi, \theta, r)$:
\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}
Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
\end{equation}
where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$m is the array radius, $c = 343$m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine angle between the microphone and the DOA, and $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative with respect to the argument of a spherical Hankel function of the second kind. The expansion is limited to 30 terms which provides negligible modeling error up to 20kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
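A direct numerical implementation of this expansion, using SciPy's spherical Bessel functions to build the Hankel-function derivative, could look as follows (a sketch, valid for frequencies above zero):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def rigid_sphere_response(cos_gamma, freq, R=0.042, c=343.0, n_max=30):
    """Directional response of a microphone on a rigid spherical baffle (expansion above).

    cos_gamma: cosine of the angle between the microphone position and the DOA
    freq:      frequency in Hz (> 0)
    """
    kR = 2 * np.pi * freq * R / c
    n = np.arange(n_max + 1)
    # derivative of the spherical Hankel function of the second kind: h_n'^(2) = j_n' - i*y_n'
    hn2_prime = spherical_jn(n, kR, derivative=True) - 1j * spherical_yn(n, kR, derivative=True)
    terms = (1j ** (n - 1)) / hn2_prime * (2 * n + 1) * eval_legendre(n, cos_gamma)
    return terms.sum() / kR**2
```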
## Reference directions-of-arrival
For each extracted RIR across a measurement trajectory there is a direction-of-arrival (DOA) associated with it, which can be used as the reference direction for sound source spatialized using this RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs, by windowing the direct sound part and applying a broadband version of the MUSIC localization algorithm on the windowed multichannel signal.
The DOAs are provided as Cartesian components [x, y, z] of unit length vectors.
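To move between these Cartesian vectors and the azimuth/elevation convention used in the formulas above, a small helper suffices (assuming azimuth is measured counter-clockwise from the x-axis):

```python
import numpy as np

def xyz_to_azi_ele(doa_xyz):
    """Convert unit DOA vectors [..., (x, y, z)] to azimuth/elevation in degrees."""
    doa_xyz = np.asarray(doa_xyz)
    azi = np.degrees(np.arctan2(doa_xyz[..., 1], doa_xyz[..., 0]))
    ele = np.degrees(np.arcsin(np.clip(doa_xyz[..., 2], -1.0, 1.0)))
    return azi, ele
```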
## Scene generator
A set of routines is shared, here termed scene generator, that can spatialize a bank of sound samples using the SRIRs and noise recordings of this library, to emulate scenes for the two target formats. The code is the same as the one used to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset, and has been ported to Python from the original version written in Matlab.
The generator can be found [**here**](https://github.com/danielkrause/DCASE2022-data-generator), along with more details on its use.
The generator at the moment is set to work with the [NIGENS](https://zenodo.org/record/2535878) sound event sample database, and the [FSD50K](https://zenodo.org/record/4060432) sound event database, but additional sample banks can be added with small modifications.
The dataset together with the generator has been used by the authors in the following public challenges:
- [DCASE 2019 Challenge Task 3](https://dcase.community/challenge2019/task-sound-event-localization-and-detection), to generate the **TAU Spatial Sound Events 2019** dataset ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088))
- [DCASE 2020 Challenge Task 3](https://dcase.community/challenge2020/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2020**](https://doi.org/10.5281/zenodo.4064792) dataset
- [DCASE2021 Challenge Task 3](https://dcase.community/challenge2021/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset
- [DCASE2022 Challenge Task 3](https://dcase.community/challenge2022/task-sound-event-localization-and-detection), to generate additional [SELD synthetic mixtures for training the task baseline](https://doi.org/10.5281/zenodo.6406873)
> **NOTE**: The current version of the generator is work-in-progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact [[email protected]](mailto:[email protected]), or [[email protected]](mailto:[email protected]).
## Dataset structure
The dataset contains a folder of the SRIRs (`TAU-SRIR_DB`), with all the SRIRs per room in a single _mat_ file, e.g. `rirs_09_tb103.mat`. The specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains an `rirs` array of 4x3 structures, each with the fields `mic` and `foa`. Selecting e.g. the 2nd trajectory and 3rd height with `rirs(2,3)` returns `mic` and `foa` fields with an array of size `[7200x4x114]` on each. The array contains the SRIRs for the specific format, and it is arranged as `[samples x channels x DOAs]`, meaning that 300msec long (7200samples@24kHz) 4 channel RIRs are extracted at 114 positions along that specific trajectory.
The file `rirdata.mat` contains some general information such as sample rate, format specifications, and most importantly the DOAs of every extracted SRIR. Those can be found in the `rirdata.room` field, which is an array of 9 structures itself, one per room. Checking for example `rirdata.room(8)` returns the name of the specific room (_tb103_), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. `rirdata.room(8).rirs(2,3).doa_xyz` which returns an array of size `[114x3]`. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room `TB103`.
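In Python, the same data can be read with SciPy; with `struct_as_record=False, squeeze_me=True` the MATLAB structs come out as attribute-style objects (treat the exact indexing below as a sketch, since the unpacking depends on the loadmat options):

```python
from scipy.io import loadmat

rirs = loadmat("TAU-SRIR_DB/rirs_09_tb103.mat",
               struct_as_record=False, squeeze_me=True)["rirs"]
rirdata = loadmat("TAU-SRIR_DB/rirdata.mat",
                  struct_as_record=False, squeeze_me=True)["rirdata"]

traj = rirs[1, 2]                  # 2nd trajectory group, 3rd source height (0-based)
print(traj.mic.shape)              # (7200, 4, 114): samples x channels x DOAs
print(traj.foa.shape)

doa_xyz = rirdata.room[7].rirs[1, 2].doa_xyz   # room 8 (TB103), same trajectory
print(doa_xyz.shape)               # (114, 3)
```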
The file `measinfo.mat` contains measurement and recording information for each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, start and end positions for the linear trajectories, or distances from center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure level of the measurement noise signal. Coordinates are defined with respect to the origin being at the base of the microphone. Based on the information included in `measinfo.mat`, one can plot a 3D arrangement of the trajectories around the microphone, though keep in mind that these would be the ideal intended circular or linear trajectories, while the actual DOAs obtained from acoustic analysis have some deviations around those ideal paths.
Finally, the dataset contains a folder of spatial ambient noise recordings (`TAU-SNoise_DB`), with one subfolder per room containing two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 mins to 30 mins. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to augment the ambience beyond the original recording time. Such a use case is demonstrated in the scene generator examples.
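Sketched below is a typical use: spatialize a dry mono sample with one of the SRIRs and add a random segment of the room's ambience at a chosen SNR (file names here are illustrative, and the ambience is assumed to be in the same MIC format and at 24 kHz):

```python
import numpy as np
import soundfile as sf
from scipy.io import loadmat
from scipy.signal import fftconvolve

rirs = loadmat("TAU-SRIR_DB/rirs_09_tb103.mat",
               struct_as_record=False, squeeze_me=True)["rirs"]
srir = rirs[1, 2].mic[:, :, 0]            # one 4-channel SRIR: samples x channels

dry, fs = sf.read("speech_24k.wav")        # mono 24 kHz dry sample (illustrative name)
amb, _ = sf.read("TAU-SNoise_DB/tb103/ambience_mic.wav")  # illustrative file name

# Convolve the dry sample with each SRIR channel to spatialize it
wet = np.stack([fftconvolve(dry, srir[:, ch]) for ch in range(srir.shape[1])], axis=1)

# Add a random ambience segment at 20 dB SNR
rng = np.random.default_rng(0)
start = rng.integers(0, amb.shape[0] - wet.shape[0])
noise = amb[start:start + wet.shape[0], :wet.shape[1]]
gain = np.sqrt((wet ** 2).mean() / ((noise ** 2).mean() * 10 ** (20 / 10)))
mix = wet + gain * noise
```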
## Download
The files `TAU-SRIR_DB.z01`, ..., `TAU-SRIR_DB.zip` contain the SRIRs and measurement info files.
The files `TAU-SNoise_DB.z01`, ..., `TAU-SNoise_DB.zip` contain the ambient noise recordings.
Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in Linux or OSX terminal:
Combine the split archive to a single archive:
>zip -s 0 split.zip --out single.zip
Extract the single archive using unzip:
>unzip single.zip
## License
The database is published under a custom **open non-commercial with attribution** license. It can be found in the `LICENSE.txt` file that accompanies the data.
| Fhrozen/tau_srir_db | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:n<1K",
"source_datasets:unknown",
"license:unknown",
"audio-slot-filling",
"region:us"
] | 2022-04-25T01:54:54+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "license": "unknown", "size_categories": ["n<1K"], "source_datasets": ["unknown"], "task_categories": ["audio-classification"], "task_ids": [], "tags": ["audio-slot-filling"]} | 2022-12-03T03:27:05+00:00 | [] | [] | TAGS
#task_categories-audio-classification #annotations_creators-unknown #language_creators-unknown #size_categories-n<1K #source_datasets-unknown #license-unknown #audio-slot-filling #region-us
| TAU Spatial Room Impulse Response Database (TAU-SRIR DB)
========================================================
Important
---------
This is a copy from the Zenodo Original one
Description
-----------
Audio Research Group / Tampere University
AUTHORS
Tampere University
* Archontis Politis (contact, profile)
* Sharath Adavanne (contact, profile)
* Tuomas Virtanen (contact, profile)
Data Collection 2019-2020
* Archontis Politis
* Aapo Hakala
* Ali Gohar
Data Collection 2017-2018
* Sharath Adavanne
* Aapo Hakala
* Eemi Fagerlund
* Aino Koskimies
The TAU Spatial Room Impulse Response Database (TAU-SRIR DB) database contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulation of spatial multichannel recordings for evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinct properties of the database compared to other databases of room impulse responses are:
* Capturing in a high resolution multichannel format (32 channels) from which multiple more limited application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural).
* Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving source scenarios.
* Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods.
* Multiple rooms, allowing evaluation of methods at various acoustic conditions, and training of methods with the aim of generalization on different rooms.
The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement 637422 EVERYSOUND.
. A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. In *Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)*, Tokyo, Japan.
>
>
>
available here. A more detailed report specifically focusing on the dataset collection and properties will follow.
Aim
---
The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. It is especially suitable for the following application scenarios:
* monophonic and multichannal reverberant single- or multi-source speech in multi-room reverberant conditions,
* monophonic and multichannel polyphonic sound events in multi-room reverberant conditions,
* single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios,
* single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios,
* sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios.
Specifications
--------------
The SRIRs were captured using an Eigenmike spherical microphone array. A Genelec G Three loudspeaker was used to playback a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and far-field recording independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24kHz for compactness.
The currently published SRIR set was recorded at nine different indoor locations inside the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ with the room. Possible azimuths span the whole range of $\phi\in[-180,180)$, while the elevations span approximately a range between $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows:
1. Large open space in underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise.
2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms.
3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise.
4. Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise.
5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise.
6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise.
7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise.
8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise.
9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise.
The measurement trajectories were organized in groups, with each group being specified by a circular or linear trace at the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories two ranges were measured, a *close* and a *far* one, except room TC352, where the same range was measured twice, but with different furniture configuration and open or closed doors. For linear trajectories also two ranges were measured, *close* and *far*, but with linear paths at either side of the array, resulting in 4 unique trajectory groups, with the exception of room SA203 where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups are always parallel to each other, in the same room.
Each trajectory group had multiple measurement trajectories, following the same floor path, but with the source at different heights.
The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately every 1 degree from the microphone. This extraction scheme instead of extracting SRIRs at equally spaced points along the path (e.g. every 20cm) was found more practical for synthesis purposes, making emulation of moving sources at an approximately constant angular speed easier.
The following table summarizes the above properties for the currently available rooms:
More details on the trajectory geometries can be found in the database info file ('URL').
Recording formats
-----------------
The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
For the first-order ambisonics (FOA):
\begin{eqnarray}
H\_1(\phi, \theta, f) &=& 1 \
H\_2(\phi, \theta, f) &=& \sin(\phi) \* \cos(\theta) \
H\_3(\phi, \theta, f) &=& \sin(\theta) \
H\_4(\phi, \theta, f) &=& \cos(\phi) \* \cos(\theta)
\end{eqnarray}
The (FOA) format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9kHz with the specific microphone array, while the actual encoded responses start to deviate gradually at higher frequencies from the ideal ones provided above. Routines that can compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found here.
For the tetrahedral microphone array (MIC):
The four microphone have the following positions, in spherical coordinates $(\phi, \theta, r)$:
\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}
Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H\_m(\phi\_m, \theta\_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum\_{n=0}^{30} \frac{i^{n-1}}{h\_n'^{(2)}(\omega R/c)}(2n+1)P\_n(\cos(\gamma\_m))
\end{equation}
where $m$ is the channel number, $(\phi\_m, \theta\_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$m is the array radius, $c = 343$m/s is the speed of sound, $\cos(\gamma\_m)$ is the cosine angle between the microphone and the DOA, and $P\_n$ is the unnormalized Legendre polynomial of degree $n$, and $h\_n'^{(2)}$ is the derivative with respect to the argument of a spherical Hankel function of the second kind. The expansion is limited to 30 terms which provides negligible modeling error up to 20kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found here.
Reference directions-of-arrival
-------------------------------
For each extracted RIR across a measurement trajectory there is a direction-of-arrival (DOA) associated with it, which can be used as the reference direction for sound source spatialized using this RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs, by windowing the direct sound part and applying a broadband version of the MUSIC localization algorithm on the windowed multichannel signal.
The DOAs are provided as Cartesian components [x, y, z] of unit length vectors.
Scene generator
---------------
A set of routines is shared, here termed scene generator, that can spatialize a bank of sound samples using the SRIRs and noise recordings of this library, to emulate scenes for the two target formats. The code is the same as the one used to generate the TAU-NIGENS Spatial Sound Events 2021 dataset, and has been ported to Python from the original version written in Matlab.
The generator can be found here, along with more details on its use.
The generator at the moment is set to work with the NIGENS sound event sample database, and the FSD50K sound event database, but additional sample banks can be added with small modifications.
The dataset together with the generator has been used by the authors in the following public challenges:
* DCASE 2019 Challenge Task 3, to generate the TAU Spatial Sound Events 2019 dataset (development/evaluation)
* DCASE 2020 Challenge Task 3, to generate the TAU-NIGENS Spatial Sound Events 2020 dataset
* DCASE2021 Challenge Task 3, to generate the TAU-NIGENS Spatial Sound Events 2021 dataset
* DCASE2022 Challenge Task 3, to generate additional SELD synthetic mixtures for training the task baseline
>
> NOTE: The current version of the generator is work-in-progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact URL@URL, or archontis.politis@URL.
>
>
>
Dataset structure
-----------------
The dataset contains a folder of the SRIRs ('TAU-SRIR\_DB'), with all the SRIRs per room in a single *mat* file, e.g. 'rirs\_09\_tb103.mat'. The specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains an 'rirs' array of 4x3 structures, each with the fields 'mic' and 'foa'. Selecting e.g. the 2nd trajectory and 3rd height with 'rirs(2,3)' returns 'mic' and 'foa' fields, each holding an array of size '[7200x4x114]'. The array contains the SRIRs for the specific format and is arranged as '[samples x channels x DOAs]', meaning that 300 ms long (7200 samples @ 24 kHz) 4-channel RIRs are extracted at 114 positions along that specific trajectory.
The file 'URL' contains some general information such as sample rate, format specifications, and most importantly the DOAs of every extracted SRIR. Those can be found in the 'URL' field, which is an array of 9 structures itself, one per room. Checking for example 'URL(8)' returns the name of the specific room (*tb103*), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. 'URL(8).rirs(2,3).doa\_xyz' which returns an array of size '[114x3]'. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room 'TB103'.
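In Python, the structure described above can be navigated with SciPy's MAT-file reader. The snippet below is only a sketch: it assumes the files load with `scipy.io.loadmat` (if they are stored in MATLAB v7.3/HDF5 format, a reader such as `mat73` or `h5py` would be needed instead), and the MATLAB 1-based indices become 0-based in Python:

```python
from scipy.io import loadmat

# e.g. the room file mentioned above
mat = loadmat("TAU-SRIR_DB/rirs_09_tb103.mat", squeeze_me=True, struct_as_record=False)

rirs = mat["rirs"]        # 4 x 3 array of structs: [trajectory x height]
entry = rirs[1, 2]        # 2nd trajectory, 3rd height (0-based indexing here)
mic_rirs = entry.mic      # [7200 samples x 4 channels x 114 DOAs]
foa_rirs = entry.foa
print(mic_rirs.shape, foa_rirs.shape)

# The companion info file (name omitted here) holds the corresponding DOAs,
# e.g. a [114 x 3] array of unit vectors per trajectory/height combination.
```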
The file 'URL' contains measurement and recording information for each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, the start and end positions of the linear trajectories or the distances from the center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure level of the measurement noise signal. Coordinates are defined with respect to the origin being at the base of the microphone. Based on the information included in the 'URL', one can plot a 3D arrangement of the trajectories around the microphone, though keep in mind that these would be the ideal intended circular or linear trajectories, while the actual DOAs obtained from acoustic analysis deviate somewhat around those ideal paths.
Finally, the dataset contains a folder of spatial ambient noise recordings ('TAU-SNoise\_DB'), with one subfolder per room holding two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 to 30 minutes. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to generate more ambience than the original recording time provides. Such a use case is demonstrated in the scene generator examples.
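For example, a segment of the ambient recording can be scaled and mixed into a spatialized event at a target SNR roughly as follows. This is only a sketch; the SNR here is defined over the broadband energy of the full segments, which may not match exactly how the official generator computes it:

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr_db):
    """Mix `noise` into `signal` (both [samples x channels]) at a target broadband SNR in dB."""
    noise = noise[: len(signal)]                  # crop a segment of matching length
    sig_pow = np.mean(signal ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12       # avoid division by zero
    gain = np.sqrt(sig_pow / (noise_pow * 10.0 ** (snr_db / 10.0)))
    return signal + gain * noise
```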
Download
--------
The files 'TAU-SRIR\_DB.z01', ..., 'TAU-SRIR\_DB.zip' contain the SRIRs and measurement info files.
The files 'TAU-SNoise\_DB.z01', ..., 'TAU-SNoise\_DB.zip' contain the ambient noise recordings.
Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in Linux or OSX terminal:
Combine the split archive to a single archive:
>
> zip -s 0 URL --out URL
>
>
>
Extract the single archive using unzip:
>
> unzip URL
>
>
>
License
=======
The database is published under a custom open, non-commercial license with attribution. It can be found in the 'URL' file that accompanies the data.
| [] | [
"TAGS\n#task_categories-audio-classification #annotations_creators-unknown #language_creators-unknown #size_categories-n<1K #source_datasets-unknown #license-unknown #audio-slot-filling #region-us \n"
] |
f9c3dafb9b947ddeb04e0b4fcb5c3a904d9105e3 |
## Dataset Description
- **Homepage:** None
- **Repository:** [https://github.com/d0r1h/ILC](https://github.com/d0r1h/ILC)
- **Paper:** None
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed] | d0r1h/ILC | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"legal",
"region:us"
] | 2022-04-25T06:13:24+00:00 | {"language": ["en"], "license": "cc-by-3.0", "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization"], "pretty_name": "ILC", "tags": ["legal"]} | 2023-09-02T11:03:40+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-3.0 #legal #region-us
|
## Dataset Description
- Homepage: None
- Repository: [URL
- Paper: None
- Leaderboard:
- Point of Contact: | [
"## Dataset Description\n- Homepage: None\n- Repository: [URL\n- Paper: None\n- Leaderboard: \n- Point of Contact:"
] | [
"TAGS\n#task_categories-summarization #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-3.0 #legal #region-us \n",
"## Dataset Description\n- Homepage: None\n- Repository: [URL\n- Paper: None\n- Leaderboard: \n- Point of Contact:"
] |
f23677a6713b1558fe0e6ba3ec8db76ec8e49e98 | ## Overview
Original dataset available [here](https://wellecks.github.io/dialogue_nli/).
## Dataset curation
Original `label` column is renamed `original_label`. The original classes are renamed as follows
```
{"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
```
and encoded with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
and stored in the newly created column `label`.
The following splits and the corresponding columns are present in the original files
```
train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
verified_test {'dtype', 'annotation3', 'sentence1', 'sentence2', 'annotation1', 'annotation2', 'original_label', 'label', 'triple2', 'triple1'}
extra_test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_havenot {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_attributes {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_likedislike {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
```
Note that I only keep the common columns, which means that I drop "annotation{1, 2, 3}" from `verified_test`.
Note that there are some splits with the same instances, as found by matching on "original_label", "sentence1", "sentence2".
## Code to create dataset
```python
import pandas as pd
from pathlib import Path
import json
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, Sequence
# load data
ds = {}
for path in Path(".").rglob("<path to folder>/*.jsonl"):
print(path, flush=True)
with path.open("r") as fl:
data = fl.read()
try:
d = json.loads(data, encoding="utf-8")
except json.JSONDecodeError as error:
print(error)
df = pd.DataFrame(d)
# encode labels
df["original_label"] = df["label"]
df["label"] = df["label"].map({"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds[path.name.split(".")[0]] = df
# prettify names of data splits
datasets = {
k.replace("dialogue_nli_", "").replace("uu_", "").lower(): v
for k, v in ds.items()
}
datasets.keys()
#> dict_keys(['train', 'dev', 'test', 'verified_test', 'extra_test', 'extra_dev', 'extra_train', 'valid_havenot', 'valid_attributes', 'valid_likedislike'])
# cast to datasets using only common columns
features = Features({
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"sentence1": Value(dtype="string", id=None),
"sentence2": Value(dtype="string", id=None),
"triple1": Sequence(feature=Value(dtype="string", id=None), length=3),
"triple2": Sequence(feature=Value(dtype="string", id=None), length=3),
"dtype": Value(dtype="string", id=None),
"id": Value(dtype="string", id=None),
"original_label": Value(dtype="string", id=None),
})
ds = {}
for name, df in datasets.items():
if "id" not in df.columns:
df["id"] = ""
ds[name] = Dataset.from_pandas(df.loc[:, list(features.keys())], features=features)
ds = DatasetDict(ds)
ds.push_to_hub("dialogue_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            ds[i].to_pandas(),
            ds[j].to_pandas(),
            on=["original_label", "sentence1", "sentence2"],
            how="inner",
        ).shape[0],
    )
#> train - dev: 58
#> train - test: 98
#> train - verified_test: 90
#> train - extra_test: 0
#> train - extra_dev: 0
#> train - extra_train: 0
#> train - valid_havenot: 0
#> train - valid_attributes: 0
#> train - valid_likedislike: 0
#> dev - test: 19
#> dev - verified_test: 19
#> dev - extra_test: 0
#> dev - extra_dev: 75
#> dev - extra_train: 75
#> dev - valid_havenot: 75
#> dev - valid_attributes: 75
#> dev - valid_likedislike: 75
#> test - verified_test: 12524
#> test - extra_test: 34
#> test - extra_dev: 0
#> test - extra_train: 0
#> test - valid_havenot: 0
#> test - valid_attributes: 0
#> test - valid_likedislike: 0
#> verified_test - extra_test: 29
#> verified_test - extra_dev: 0
#> verified_test - extra_train: 0
#> verified_test - valid_havenot: 0
#> verified_test - valid_attributes: 0
#> verified_test - valid_likedislike: 0
#> extra_test - extra_dev: 0
#> extra_test - extra_train: 0
#> extra_test - valid_havenot: 0
#> extra_test - valid_attributes: 0
#> extra_test - valid_likedislike: 0
#> extra_dev - extra_train: 250946
#> extra_dev - valid_havenot: 250946
#> extra_dev - valid_attributes: 250946
#> extra_dev - valid_likedislike: 250946
#> extra_train - valid_havenot: 250946
#> extra_train - valid_attributes: 250946
#> extra_train - valid_likedislike: 250946
#> valid_havenot - valid_attributes: 250946
#> valid_havenot - valid_likedislike: 250946
#> valid_attributes - valid_likedislike: 250946
``` | pietrolesci/dialogue_nli | [
"region:us"
] | 2022-04-25T07:21:01+00:00 | {} | 2022-04-25T07:39:10+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Original dataset available here.
## Dataset curation
Original 'label' column is renamed 'original_label'. The original classes are renamed as follows
and encoded with the following mapping
and stored in the newly created column 'label'.
The following splits and the corresponding columns are present in the original files
Note that I only keep the common columns, which means that I drop "annotation{1, 2, 3}" from 'verified_test'.
Note that there are some splits with the same instances, as found by matching on "original_label", "sentence1", "sentence2".
## Code to create dataset
| [
"## Overview\nOriginal dataset available here.",
"## Dataset curation\nOriginal 'label' column is renamed 'original_label'. The original classes are renamed as follows\n\n\n\nand encoded with the following mapping\n\n\n\nand stored in the newly created column 'label'.\n\n\nThe following splits and the corresponding columns are present in the original files\n\n\n\nNote that I only keep the common columns, which means that I drop \"annotation{1, 2, 3}\" from 'verified_test'.\nNote that there are some splits with the same instances, as found by matching on \"original_label\", \"sentence1\", \"sentence2\".",
"## Code to create dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\nOriginal dataset available here.",
"## Dataset curation\nOriginal 'label' column is renamed 'original_label'. The original classes are renamed as follows\n\n\n\nand encoded with the following mapping\n\n\n\nand stored in the newly created column 'label'.\n\n\nThe following splits and the corresponding columns are present in the original files\n\n\n\nNote that I only keep the common columns, which means that I drop \"annotation{1, 2, 3}\" from 'verified_test'.\nNote that there are some splits with the same instances, as found by matching on \"original_label\", \"sentence1\", \"sentence2\".",
"## Code to create dataset"
] |
1b34f1c8b073c6782b68dc3c5c10ef6356a284d3 | ## Overview
Original dataset [here](https://github.com/decompositional-semantics-initiative/DNC).
This dataset has been proposed in [Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation](https://www.aclweb.org/anthology/D18-1007/).
## Dataset curation
This version of the dataset does not include the `type-of-inference` "KG" as its label set is
`[1, 2, 3, 4, 5]` while here we focus on NLI-related label sets, i.e. `[entailed, not-entailed]`.
For this reason, I named the dataset DNLI for _Diverse_ NLI, as in [Liu et al 2020](https://aclanthology.org/2020.conll-1.48/), instead of DNC.
This version of the dataset contains columns from the `*_data.json` and the `*_metadata.json` files available in the repo.
In the original repo, each data file has the following keys and values:
- `context`: The context sentence for the NLI pair. The context is already tokenized.
- `hypothesis`: The hypothesis sentence for the NLI pair. The hypothesis is already tokenized.
- `label`: The label for the NLI pair
- `label-set`: The set of possible labels for the specific NLI pair
- `binary-label`: A `True` or `False` label. See the paper for details on how we convert the `label` into a binary label.
- `split`: This can be `train`, `dev`, or `test`.
- `type-of-inference`: A string indicating what type of inference is tested in this example.
- `pair-id`: A unique integer id for the NLI pair. The `pair-id` is used to find the corresponding metadata for any given NLI pair
while each metadata file has the following columns
- `pair-id`: A unique integer id for the NLI pair.
- `corpus`: The original corpus where this example came from.
- `corpus-sent-id`: The id of the sentence (or example) in the original dataset that we recast.
- `corpus-license`: The license for the data from the original dataset.
- `creation-approach`: Determines the method used to recast this example. Options are `automatic`, `manual`, or `human-labeled`.
- `misc`: A dictionary of other relevant information. This is an optional field.
The files are merged on the `pair-id` key. I **do not** include the `misc` column as it is not essential for NLI.
NOTE: the label mapping is **not** the customary 3-class one used for NLI tasks. The authors used a binary target, which I encoded
with the following mapping `{"not-entailed": 0, "entailed": 1}`.
NOTE: some instances are present in multiple splits (matching performed by exact matching on "context", "hypothesis", and "label").
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict, Sequence
from pathlib import Path
paths = {
"train": "<path_to_folder>/DNC-master/train",
"dev": "<path_to_folder>/DNC-master/dev",
"test": "<path_to_folder>/DNC-master/test",
}
# read all data files
dfs = []
for split, path in paths.items():
for f_name in Path(path).rglob("*_data.json"):
df = pd.read_json(str(f_name))
df["file_split_data"] = split
dfs.append(df)
data = pd.concat(dfs, ignore_index=False, axis=0)
# read all metadata files
meta_dfs = []
for split, path in paths.items():
for f_name in Path(path).rglob("*_metadata.json"):
df = pd.read_json(str(f_name))
meta_dfs.append(df)
metadata = pd.concat(meta_dfs, ignore_index=False, axis=0)
# merge
dataset = pd.merge(data, metadata, on="pair-id", how="left")
# check that the split column reflects file splits
assert sum(dataset["split"] != dataset["file_split_data"]) == 0
dataset = dataset.drop(columns=["file_split_data"])
# fix `binary-label` column
dataset.loc[~dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = False
dataset.loc[dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = True
# fix datatype
dataset["corpus-sent-id"] = dataset["corpus-sent-id"].astype(str)
# order columns as shown in the README.md
columns = [
"context",
"hypothesis",
"label",
"label-set",
"binary-label",
"split",
"type-of-inference",
"pair-id",
"corpus",
"corpus-sent-id",
"corpus-license",
"creation-approach",
"misc",
]
dataset = dataset.loc[:, columns]
# remove misc column
dataset = dataset.drop(columns=["misc"])
# remove KG for NLI
dataset.loc[(dataset["label"].isin([1, 2, 3, 4, 5])), "type-of-inference"].value_counts()
# > the only split with label-set [1, 2, 3, 4, 5], so remove as we focus on NLI
dataset = dataset.loc[~(dataset["type-of-inference"] == "KG")]
# encode labels
dataset["label"] = dataset["label"].map({"not-entailed": 0, "entailed": 1})
# fill NA in label-set
dataset["label-set"] = dataset["label-set"].ffill()
features = Features(
    {
        "context": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
        "label-set": Sequence(length=2, feature=Value(dtype="string")),
        "binary-label": Value(dtype="bool"),
        "split": Value(dtype="string"),
        "type-of-inference": Value(dtype="string"),
        "pair-id": Value(dtype="int64"),
        "corpus": Value(dtype="string"),
        "corpus-sent-id": Value(dtype="string"),
        "corpus-license": Value(dtype="string"),
        "creation-approach": Value(dtype="string"),
    }
)
dataset_splits = {}
for split in ("train", "dev", "test"):
df_split = dataset.loc[dataset["split"] == split]
dataset_splits[split] = Dataset.from_pandas(df_split, features=features)
dataset_splits = DatasetDict(dataset_splits)
dataset_splits.push_to_hub("pietrolesci/dnli", token="<your token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(dataset_splits.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            dataset_splits[i].to_pandas(),
            dataset_splits[j].to_pandas(),
            on=["context", "hypothesis", "label"],
            how="inner",
        ).shape[0],
    )
#> train - dev: 127
#> train - test: 55
#> dev - test: 54
```
| pietrolesci/dnc | [
"region:us"
] | 2022-04-25T07:54:56+00:00 | {} | 2022-04-25T07:59:06+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Original dataset here.
This dataset has been proposed in Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation.
## Dataset curation
This version of the dataset does not include the 'type-of-inference' "KG" as its label set is
'[1, 2, 3, 4, 5]' while here we focus on NLI-related label sets, i.e. '[entailed, not-entailed]'.
For this reason, I named the dataset DNLI for _Diverse_ NLI, as in Liu et al 2020, instead of DNC.
This version of the dataset contains columns from the '*_data.json' and the '*_metadata.json' files available in the repo.
In the original repo, each data file has the following keys and values:
- 'context': The context sentence for the NLI pair. The context is already tokenized.
- 'hypothesis': The hypothesis sentence for the NLI pair. The hypothesis is already tokenized.
- 'label': The label for the NLI pair
- 'label-set': The set of possible labels for the specific NLI pair
- 'binary-label': A 'True' or 'False' label. See the paper for details on how we convert the 'label' into a binary label.
- 'split': This can be 'train', 'dev', or 'test'.
- 'type-of-inference': A string indicating what type of inference is tested in this example.
- 'pair-id': A unique integer id for the NLI pair. The 'pair-id' is used to find the corresponding metadata for any given NLI pair
while each metadata file has the following columns
- 'pair-id': A unique integer id for the NLI pair.
- 'corpus': The original corpus where this example came from.
- 'corpus-sent-id': The id of the sentence (or example) in the original dataset that we recast.
- 'corpus-license': The license for the data from the original dataset.
- 'creation-approach': Determines the method used to recast this example. Options are 'automatic', 'manual', or 'human-labeled'.
- 'misc': A dictionary of other relevant information. This is an optional field.
The files are merged on the 'pair-id' key. I do not include the 'misc' column as it is not essential for NLI.
NOTE: the label mapping is not the custom (i.e., 3 class) for NLI tasks. They used a binary target and I encoded them
with the following mapping '{"not-entailed": 0, "entailed": 1}'.
NOTE: some instances are present in multiple splits (matching performed by exact matching on "context", "hypothesis", and "label").
## Code to create the dataset
| [
"## Overview\nOriginal dataset here.\n\nThis dataset has been proposed in Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation.",
"## Dataset curation\nThis version of the dataset does not include the 'type-of-inference' \"KG\" as its label set is \n'[1, 2, 3, 4, 5]' while here we focus on NLI-related label sets, i.e. '[entailed, not-entailed]'.\nFor this reason, I named the dataset DNLI for _Diverse_ NLI, as in Liu et al 2020, instead of DNC.\n\nThis version of the dataset contains columns from the '*_data.json' and the '*_metadata.json' files available in the repo.\nIn the original repo, each data file has the following keys and values:\n\n- 'context': The context sentence for the NLI pair. The context is already tokenized.\n- 'hypothesis': The hypothesis sentence for the NLI pair. The hypothesis is already tokenized.\n- 'label': The label for the NLI pair\n- 'label-set': The set of possible labels for the specific NLI pair\n- 'binary-label': A 'True' or 'False' label. See the paper for details on how we convert the 'label' into a binary label.\n- 'split': This can be 'train', 'dev', or 'test'.\n- 'type-of-inference': A string indicating what type of inference is tested in this example.\n- 'pair-id': A unique integer id for the NLI pair. The 'pair-id' is used to find the corresponding metadata for any given NLI pair\n\nwhile each metadata file has the following columns\n\n- 'pair-id': A unique integer id for the NLI pair. \n- 'corpus': The original corpus where this example came from.\n- 'corpus-sent-id': The id of the sentence (or example) in the original dataset that we recast.\n- 'corpus-license': The license for the data from the original dataset.\n- 'creation-approach': Determines the method used to recast this example. Options are 'automatic', 'manual', or 'human-labeled'.\n- 'misc': A dictionary of other relevant information. This is an optional field.\n\nThe files are merged on the 'pair-id' key. I do not include the 'misc' column as it is not essential for NLI.\n\nNOTE: the label mapping is not the custom (i.e., 3 class) for NLI tasks. They used a binary target and I encoded them\nwith the following mapping '{\"not-entailed\": 0, \"entailed\": 1}'.\n\nNOTE: some instances are present in multiple splits (matching performed by exact matching on \"context\", \"hypothesis\", and \"label\").",
"## Code to create the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\nOriginal dataset here.\n\nThis dataset has been proposed in Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation.",
"## Dataset curation\nThis version of the dataset does not include the 'type-of-inference' \"KG\" as its label set is \n'[1, 2, 3, 4, 5]' while here we focus on NLI-related label sets, i.e. '[entailed, not-entailed]'.\nFor this reason, I named the dataset DNLI for _Diverse_ NLI, as in Liu et al 2020, instead of DNC.\n\nThis version of the dataset contains columns from the '*_data.json' and the '*_metadata.json' files available in the repo.\nIn the original repo, each data file has the following keys and values:\n\n- 'context': The context sentence for the NLI pair. The context is already tokenized.\n- 'hypothesis': The hypothesis sentence for the NLI pair. The hypothesis is already tokenized.\n- 'label': The label for the NLI pair\n- 'label-set': The set of possible labels for the specific NLI pair\n- 'binary-label': A 'True' or 'False' label. See the paper for details on how we convert the 'label' into a binary label.\n- 'split': This can be 'train', 'dev', or 'test'.\n- 'type-of-inference': A string indicating what type of inference is tested in this example.\n- 'pair-id': A unique integer id for the NLI pair. The 'pair-id' is used to find the corresponding metadata for any given NLI pair\n\nwhile each metadata file has the following columns\n\n- 'pair-id': A unique integer id for the NLI pair. \n- 'corpus': The original corpus where this example came from.\n- 'corpus-sent-id': The id of the sentence (or example) in the original dataset that we recast.\n- 'corpus-license': The license for the data from the original dataset.\n- 'creation-approach': Determines the method used to recast this example. Options are 'automatic', 'manual', or 'human-labeled'.\n- 'misc': A dictionary of other relevant information. This is an optional field.\n\nThe files are merged on the 'pair-id' key. I do not include the 'misc' column as it is not essential for NLI.\n\nNOTE: the label mapping is not the custom (i.e., 3 class) for NLI tasks. They used a binary target and I encoded them\nwith the following mapping '{\"not-entailed\": 0, \"entailed\": 1}'.\n\nNOTE: some instances are present in multiple splits (matching performed by exact matching on \"context\", \"hypothesis\", and \"label\").",
"## Code to create the dataset"
] |
bbf6138e30cff48af0b9fa46ed710f68400dde85 | This dataset contains IMDB Ratings of various movies of different languages. This dataset also contains the number of votes each movie received | Meena/imdb_ratings_table | [
"region:us"
] | 2022-04-25T07:59:04+00:00 | {} | 2022-04-25T08:25:49+00:00 | [] | [] | TAGS
#region-us
| This dataset contains IMDB Ratings of various movies of different languages. This dataset also contains the number of votes each movies received | [] | [
"TAGS\n#region-us \n"
] |
fbd6fcc5c3b8dc79ad26eaced52d7f04c6fea6d7 | ## Overview
Original dataset page [here](https://abhilasharavichander.github.io/NLI_StressTest/) and dataset available [here](https://drive.google.com/open?id=1faGA5pHdu5Co8rFhnXn-6jbBYC2R1dhw).
## Dataset curation
Added new column `label` with encoded labels with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
and the columns with parse information are dropped as they are not well formatted.
Also, the name of the file from which each instance comes is added in the column `dtype`.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
import json
from pathlib import Path
# load data
ds = {}
path = Path("<path to folder>")
for i in path.rglob("*.jsonl"):
    print(i)
    name = str(i).split("/")[0].lower()
    dtype = str(i).split("/")[1].lower()
    # read data
    with i.open("r") as fl:
        df = pd.DataFrame([json.loads(line) for line in fl])
    # select columns
    df = df.loc[:, ["sentence1", "sentence2", "gold_label"]]
    # add file name as column
    df["dtype"] = dtype
    # encode labels
    df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
    ds[name] = df
# cast to dataset
features = Features(
    {
        "sentence1": Value(dtype="string"),
        "sentence2": Value(dtype="string"),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
        "dtype": Value(dtype="string"),
        "gold_label": Value(dtype="string"),
    }
)
ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()})
ds.push_to_hub("pietrolesci/stress_tests_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            ds[i].to_pandas(),
            ds[j].to_pandas(),
            on=["sentence1", "sentence2", "label"],
            how="inner",
        ).shape[0],
    )
#> numerical_reasoning - negation: 0
#> numerical_reasoning - length_mismatch: 0
#> numerical_reasoning - spelling_error: 0
#> numerical_reasoning - word_overlap: 0
#> numerical_reasoning - antonym: 0
#> negation - length_mismatch: 0
#> negation - spelling_error: 0
#> negation - word_overlap: 0
#> negation - antonym: 0
#> length_mismatch - spelling_error: 0
#> length_mismatch - word_overlap: 0
#> length_mismatch - antonym: 0
#> spelling_error - word_overlap: 0
#> spelling_error - antonym: 0
#> word_overlap - antonym: 0
``` | pietrolesci/stress_tests_nli | [
"region:us"
] | 2022-04-25T08:21:50+00:00 | {} | 2022-04-25T08:32:28+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Original dataset page here and dataset available here.
## Dataset curation
Added new column 'label' with encoded labels with the following mapping
and the columns with parse information are dropped as they are not well formatted.
Also, the name of the file from which each instance comes is added in the column 'dtype'.
## Code to create the dataset
| [
"## Overview\n\nOriginal dataset page here and dataset available here.",
"## Dataset curation\nAdded new column 'label' with encoded labels with the following mapping\n\n\n\nand the columns with parse information are dropped as they are not well formatted.\n\nAlso, the name of the file from which each instance comes is added in the column 'dtype'.",
"## Code to create the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\n\nOriginal dataset page here and dataset available here.",
"## Dataset curation\nAdded new column 'label' with encoded labels with the following mapping\n\n\n\nand the columns with parse information are dropped as they are not well formatted.\n\nAlso, the name of the file from which each instance comes is added in the column 'dtype'.",
"## Code to create the dataset"
] |
8526f3a347c2d5760dc79a3dbe88134cc89c36b9 | ## Overview
Original dataset available [here](https://github.com/jimmycode/gen-debiased-nli#training-with-our-datasets).
```latex
@inproceedings{gen-debiased-nli-2022,
title = "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets",
author = "Wu, Yuxiang and
Gardner, Matt and
Stenetorp, Pontus and
Dasigi, Pradeep",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
publisher = "Association for Computational Linguistics",
}
```
## Dataset curation
No curation.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
import json
from pathlib import Path
# load data
path = Path("./")
ds = {}
for i in path.rglob("*.jsonl"):
    print(i)
    name = str(i).split(".")[0].lower().replace("-", "_")
    with i.open("r") as fl:
        df = pd.DataFrame([json.loads(line) for line in fl])
    ds[name] = df
# cast to dataset
features = Features(
    {
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
        "type": Value(dtype="string"),
    }
)
ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()})
ds.push_to_hub("pietrolesci/gen_debiased_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            ds[i].to_pandas(),
            ds[j].to_pandas(),
            on=["premise", "hypothesis", "label"],
            how="inner",
        ).shape[0],
    )
#> mnli_seq_z - snli_z_aug: 0
#> mnli_seq_z - mnli_par_z: 477149
#> mnli_seq_z - snli_seq_z: 0
#> mnli_seq_z - mnli_z_aug: 333840
#> mnli_seq_z - snli_par_z: 0
#> snli_z_aug - mnli_par_z: 0
#> snli_z_aug - snli_seq_z: 506624
#> snli_z_aug - mnli_z_aug: 0
#> snli_z_aug - snli_par_z: 504910
#> mnli_par_z - snli_seq_z: 0
#> mnli_par_z - mnli_z_aug: 334960
#> mnli_par_z - snli_par_z: 0
#> snli_seq_z - mnli_z_aug: 0
#> snli_seq_z - snli_par_z: 583107
#> mnli_z_aug - snli_par_z: 0
``` | pietrolesci/gen_debiased_nli | [
"region:us"
] | 2022-04-25T08:35:37+00:00 | {} | 2022-04-25T08:49:52+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Original dataset available here.
## Dataset curation
No curation.
## Code to create the dataset
| [
"## Overview\nOriginal dataset available here.",
"## Dataset curation\nNo curation.",
"## Code to create the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\nOriginal dataset available here.",
"## Dataset curation\nNo curation.",
"## Code to create the dataset"
] |
48d27a285f1919f3f7e6cd53b6a07fb13a238efb | ## Overview
Original dataset available [here](https://github.com/krandiash/gpt3-nli). Debiased dataset generated with GPT-3.
## Dataset curation
All string columns are stripped. Labels are encoded with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features
import json
# load data
with open("data/dataset.jsonl", "r") as fl:
    df = pd.DataFrame([json.loads(line) for line in fl])
df.columns = df.columns.str.strip()
# fix dtypes
df["guid"] = df["guid"].astype(int)
for col in df.select_dtypes(object):
    df[col] = df[col].str.strip()
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features(
    {
        "text_a": Value(dtype="string"),
        "text_b": Value(dtype="string"),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
        "guid": Value(dtype="int64"),
    }
)
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("pietrolesci/gpt3_nli", token="<token>")
``` | pietrolesci/gpt3_nli | [
"region:us"
] | 2022-04-25T08:49:23+00:00 | {} | 2022-04-25T09:17:45+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Original dataset available here. Debiased dataset generated with GPT-3.
## Dataset curation
All string columns are stripped. Labels are encoded with the following mapping
## Code to create the dataset
| [
"## Overview\nOriginal dataset available here. Debiased dataset generated with GPT-3.",
"## Dataset curation\nAll string columns are stripped. Labels are encoded with the following mapping",
"## Code to create the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\nOriginal dataset available here. Debiased dataset generated with GPT-3.",
"## Dataset curation\nAll string columns are stripped. Labels are encoded with the following mapping",
"## Code to create the dataset"
] |
cd38beddf8badad23b8224f515a35c5d53ae0a53 |
# Dataset Card for UK Selective Web Archive Classification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archive's web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in '.uk', or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of the UKWA's larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extracting a set of keywords that summarise the site, via the full-text index. For more information: http://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/
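A quick way to explore the data is via the `datasets` library. This is only a sketch and assumes the dataset loads directly from the Hub under the repository id used for this card; the split and column names are printed rather than assumed:

```python
from datasets import load_dataset

# repository id taken from this card; a loading script may require trust_remote_code=True
ds = load_dataset("TheBritishLibrary/web_archive_classification")
for split_name, split in ds.items():
    print(split_name, split.num_rows, split.features)
```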
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Public Domain Mark 1.0.
### Citation Information
[Needs More Information] | TheBritishLibrary/web_archive_classification | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"lam",
"region:us"
] | 2022-04-25T09:14:45+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "UK Selective Web Archive Classification Dataset", "tags": ["lam"]} | 2023-05-04T11:59:29+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #lam #region-us
|
# Dataset Card for UK Selective Web Archive Classification Dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archives web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in .uk, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of the UKWAs larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extract a set of keywords that summarise the site, via the full-text index. For more information: URL
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Creative Commons Public Domain Mark 1.0.
| [
"# Dataset Card for UK Selective Web Archive Classification Dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archive\u0019s web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in \u0018.uk\u0019, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of the UKWA\u0019s larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extract a set of keywords that summarise the site, via the full-text index. For more information: URL",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCreative Commons Public Domain Mark 1.0."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #lam #region-us \n",
"# Dataset Card for UK Selective Web Archive Classification Dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archive\u0019s web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in \u0018.uk\u0019, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of the UKWA\u0019s larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extract a set of keywords that summarise the site, via the full-text index. For more information: URL",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCreative Commons Public Domain Mark 1.0."
] |
429dde22805398bdd6cfece27284f53a44ed6e67 | ## Overview
Original dataset is available in the original [Github repo](https://github.com/tyliupku/nli-debiasing-datasets).
This dataset is a collection of NLI benchmarks constructed as described in the paper
[An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference](https://aclanthology.org/2020.conll-1.48/)
published at CoNLL 2020.
## Dataset curation
No specific curation for this dataset. Label encoding follows exactly what is reported in the paper by the authors.
Also, from the paper:
> _all the following datasets are collected based on the public available resources proposed by their authors, thus the experimental results in this paper are comparable to the numbers reported in the original papers and the other papers that use these datasets_
Most of the datasets included follow the custom 3-class NLI convention `{"entailment": 0, "neutral": 1, "contradiction": 2}`.
However, the following datasets have a particular label mapping
- `IS-SD`: `{"non-entailment": 0, "entailment": 1}`
- `LI_TS`: `{"non-contradiction": 0, "contradiction": 1}`
## Dataset structure
This benchmark dataset includes 10 adversarial datasets. To provide more insights on how the adversarial
datasets attack the models, the authors categorized them according to the bias(es) they test and they renamed
them accordingly. More details in section 2 of the paper.
A mapping with the original dataset names is provided below
| | Name | Original Name | Original Paper | Original Curation |
|---:|:-------|:-----------------------|:--------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | PI-CD | SNLI-Hard | [Gururangan et al. (2018)](https://aclanthology.org/N18-2017/) | SNLI test sets instances that cannot be correctly classified by a neural classifier (fastText) trained on only the hypothesis sentences. |
| 1 | PI-SP | MNLI-Hard | [Liu et al. (2020)](https://aclanthology.org/2020.lrec-1.846/) | MNLI-mismatched dev sets instances that cannot be correctly classified by surface patterns that are highly correlated with the labels. |
| 2 | IS-SD | HANS | [McCoy et al. (2019)](https://aclanthology.org/P19-1334/) | Dataset that tests lexical overlap, subsequence, and constituent heuristics between the hypothesis and premises sentences. |
| 3 | IS-CS | SoSwap-AddAMod | [Nie et al. (2019)](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016867) | Pairs of sentences whose logical relations cannot be extracted from lexical information alone. Premise are taken from SNLI dev set and modified. The original paper assigns a Lexically Misleading Scores (LMS) to each instance. Here, only the subset with LMS > 0.7 is reported. |
| 4 | LI-LI | Stress tests (antonym) | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) and [Glockner et al. (2018)](https://aclanthology.org/P18-2103/) | Merge of the 'antonym' category in Naik et al. (2018) (from MNLI matched and mismatched dev sets) and Glockner et al. (2018) (SNLI training set). |
| 5 | LI-TS | Created by the authors | Created by the authors | Swap the two sentences in the original MultiNLI mismatched dev sets. If the gold label is 'contradiction', the corresponding label in the swapped instance remains unchanged, otherwise it becomes 'non-contradicted'. |
| 6 | ST-WO | Word overlap | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Word overlap' category in Naik et al. (2018). |
| 7 | ST-NE | Negation | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Negation' category in Naik et al. (2018). |
| 8 | ST-LM | Length mismatch | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Length mismatch' category in Naik et al. (2018). |
| 9 | ST-SE | Spelling errors | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Spelling errors' category in Naik et al. (2018). |
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
Tri_dataset = ["IS_CS", "LI_LI", "PI_CD", "PI_SP", "ST_LM", "ST_NE", "ST_SE", "ST_WO"]
Ent_bin_dataset = ["IS_SD"]
Con_bin_dataset = ["LI_TS"]
# read data
with open("<path to file>/robust_nli.txt", encoding="utf-8", mode="r") as fl:
    f = fl.read().strip().split("\n")
f = [eval(i) for i in f]
df = pd.DataFrame.from_dict(f)
# rename to map common names
df = df.rename(columns={"prem": "premise", "hypo": "hypothesis"})
# reorder columns
df = df.loc[:, ["idx", "split", "premise", "hypothesis", "label"]]
# create split-specific features
Tri_features = Features(
    {
        "idx": Value(dtype="int64"),
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    }
)
Ent_features = Features(
    {
        "idx": Value(dtype="int64"),
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]),
    }
)
Con_features = Features(
    {
        "idx": Value(dtype="int64"),
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
    }
)
# convert to datasets
dataset_splits = {}
for split in df["split"].unique():
    print(split)
    df_split = df.loc[df["split"] == split].copy()
    if split in Tri_dataset:
        df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
        ds = Dataset.from_pandas(df_split, features=Tri_features)
    elif split in Ent_bin_dataset:
        df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1})
        ds = Dataset.from_pandas(df_split, features=Ent_features)
    elif split in Con_bin_dataset:
        df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1})
        ds = Dataset.from_pandas(df_split, features=Con_features)
    else:
        print("ERROR:", split)
    dataset_splits[split] = ds
datasets = DatasetDict(dataset_splits)
datasets.push_to_hub("pietrolesci/robust_nli", token="<your token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(datasets.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            datasets[i].to_pandas(),
            datasets[j].to_pandas(),
            on=["premise", "hypothesis", "label"],
            how="inner",
        ).shape[0],
    )
#> PI_SP - ST_LM: 0
#> PI_SP - ST_NE: 0
#> PI_SP - IS_CS: 0
#> PI_SP - LI_TS: 1
#> PI_SP - LI_LI: 0
#> PI_SP - ST_SE: 0
#> PI_SP - PI_CD: 0
#> PI_SP - IS_SD: 0
#> PI_SP - ST_WO: 0
#> ST_LM - ST_NE: 0
#> ST_LM - IS_CS: 0
#> ST_LM - LI_TS: 0
#> ST_LM - LI_LI: 0
#> ST_LM - ST_SE: 0
#> ST_LM - PI_CD: 0
#> ST_LM - IS_SD: 0
#> ST_LM - ST_WO: 0
#> ST_NE - IS_CS: 0
#> ST_NE - LI_TS: 0
#> ST_NE - LI_LI: 0
#> ST_NE - ST_SE: 0
#> ST_NE - PI_CD: 0
#> ST_NE - IS_SD: 0
#> ST_NE - ST_WO: 0
#> IS_CS - LI_TS: 0
#> IS_CS - LI_LI: 0
#> IS_CS - ST_SE: 0
#> IS_CS - PI_CD: 0
#> IS_CS - IS_SD: 0
#> IS_CS - ST_WO: 0
#> LI_TS - LI_LI: 0
#> LI_TS - ST_SE: 0
#> LI_TS - PI_CD: 0
#> LI_TS - IS_SD: 0
#> LI_TS - ST_WO: 0
#> LI_LI - ST_SE: 0
#> LI_LI - PI_CD: 0
#> LI_LI - IS_SD: 0
#> LI_LI - ST_WO: 0
#> ST_SE - PI_CD: 0
#> ST_SE - IS_SD: 0
#> ST_SE - ST_WO: 0
#> PI_CD - IS_SD: 0
#> PI_CD - ST_WO: 0
#> IS_SD - ST_WO: 0
``` | pietrolesci/robust_nli | [
"region:us"
] | 2022-04-25T10:43:30+00:00 | {} | 2022-04-25T10:45:07+00:00 | [] | [] | TAGS
#region-us
| Overview
--------
Original dataset is available in the original Github repo.
This dataset is a collection of NLI benchmarks constructed as described in the paper
An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference
published at CoNLL 2020.
Dataset curation
----------------
No specific curation for this dataset. Label encoding follows exactly what is reported in the paper by the authors.
Also, from the paper:
>
> *all the following datasets are collected based on the public available resources proposed by their authors, thus the experimental results in this paper are comparable to the numbers reported in the original papers and the other papers that use these datasets*
>
>
>
Most of the datasets included follow the custom 3-class NLI convention '{"entailment": 0, "neutral": 1, "contradiction": 2}'.
However, the following datasets have a particular label mapping
* 'IS-SD': '{"non-entailment": 0, "entailment": 1}'
* 'LI\_TS': '{"non-contradiction": 0, "contradiction": 1}'
Dataset structure
-----------------
This benchmark dataset includes 10 adversarial datasets. To provide more insights on how the adversarial
datasets attack the models, the authors categorized them according to the bias(es) they test and they renamed
them accordingly. More details in section 2 of the paper.
A mapping with the original dataset names is provided below
Code to create the dataset
--------------------------
| [] | [
"TAGS\n#region-us \n"
] |
8ede2d7bf4531a7b210c793fe7b9e483b871c8f5 | This is part of `robust_NLI`, but since there seems to be a bug when loading and downloading
a `DatasetDict` containing datasets with different configurations, I loaded the datasets with
the differing configs as standalone datasets.
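A minimal loading sketch (the repository id below is simply this dataset's id; split names are not guaranteed):

```python
from datasets import load_dataset

# Load the standalone LI_TS subset directly from the Hub (sketch).
li_ts = load_dataset("pietrolesci/robust_nli_li_ts")
print(li_ts)
```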
Issue here: [https://github.com/huggingface/datasets/issues/4211](https://github.com/huggingface/datasets/issues/4211) | pietrolesci/robust_nli_li_ts | [
"region:us"
] | 2022-04-25T10:48:57+00:00 | {} | 2022-04-25T10:49:51+00:00 | [] | [] | TAGS
#region-us
| This is part of 'robust_NLI'but since there seems to be a bug when loading and downloading
'DatasetDict' containing datasets with different configurations, I loaded the datasets with
the differing configs as standalone datasets.
Issue here: URL | [] | [
"TAGS\n#region-us \n"
] |
338d9797bb910381f7493343991c1055d425b9c4 | This is part of `robust_NLI`, but since there seems to be a bug when loading and downloading
a `DatasetDict` containing datasets with different configurations, I loaded the datasets with
the differing configs as standalone datasets.
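A minimal loading sketch (repository id as shown for this card; the binary `{"non-entailment": 0, "entailment": 1}` encoding is the one documented for IS_SD in the parent `robust_nli` card):

```python
from datasets import load_dataset

# Load the standalone IS_SD subset (sketch).
is_sd = load_dataset("pietrolesci/robust_nli_is_sd")
print(is_sd)
```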
Issue here: [https://github.com/huggingface/datasets/issues/4211](https://github.com/huggingface/datasets/issues/4211) | pietrolesci/robust_nli_is_sd | [
"region:us"
] | 2022-04-25T10:49:03+00:00 | {} | 2022-04-25T12:07:25+00:00 | [] | [] | TAGS
#region-us
| This is part of 'robust_NLI'but since there seems to be a bug when loading and downloading
'DatasetDict' containing datasets with different configurations, I loaded the datasets with
the differing configs as standalone datasets.
Issue here: URL | [] | [
"TAGS\n#region-us \n"
] |
c47716065f1f2076c39c806dd7007027342da502 | # Python Subreddit
Dataset containing data scraped from the [Python subreddit](https://www.reddit.com/r/python). | jamescalam/reddit-python | [
"region:us"
] | 2022-04-25T11:29:25+00:00 | {} | 2022-04-25T11:41:35+00:00 | [] | [] | TAGS
#region-us
| # Python Subreddit
Dataset containing data scraped from the Python subreddit. | [
"# Python Subreddit\n\nDataset containing data scraped from the Python subreddit."
] | [
"TAGS\n#region-us \n",
"# Python Subreddit\n\nDataset containing data scraped from the Python subreddit."
] |
2e7504f0d4a70d6bf0373a39767ecd2f85ae0d9f | # Pretokenized GitHub Code Dataset
## Dataset Description
This is a pretokenized version of the Python files of the [GitHub Code dataset](https://huggingface.co/datasets/lvwerra/github-code), which consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using a BPE tokenizer trained on code, available in this [repo](https://huggingface.co/lvwerra/codeparrot). Having a pretokenized dataset can speed up the training loop by not having to tokenize data at each batch call. We also include `ratio_char_token`, which gives the ratio between the number of characters in a file and the number of tokens obtained after tokenization; this ratio can be a good filter for detecting outlier files.
### How to use it
To avoid downloading the whole dataset, you can make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
```python
from datasets import load_dataset
ds = load_dataset("loubnabnl/tokenized-github-code-python", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{'input_ids': [504, 1639, 492,...,199, 504, 1639],
'ratio_char_token': 3.560888252148997
}
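
# A possible follow-up (sketch): use the char/token ratio to filter out likely outlier files.
# The 2.0 and 10.0 thresholds below are illustrative assumptions, not recommended values.
filtered = (ex for ex in ds if 2.0 < ex["ratio_char_token"] < 10.0)
print(next(filtered)["ratio_char_token"])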
``` | loubnabnl/tokenized-github-code-python | [
"region:us"
] | 2022-04-25T11:34:38+00:00 | {} | 2022-04-27T23:13:55+00:00 | [] | [] | TAGS
#region-us
| # Pretokenized GitHub Code Dataset
## Dataset Description
This is a pretokenized version of the Python files of the GitHub Code dataset, that consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using BPE Tokenizer trained on code, available in this repo. Having a pretokenized dataset can speed up the training loop by not having to tokenize data at each batch call. We also include 'ratio_char_token' which gives the ratio between the number of characters in a file and the number of tokens we get after tokenization, this ratio can be a good filter to detect outlier files.
### How to use it
To avoid downloading the whole dataset, you can make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:
| [
"# Pretokenized GitHub Code Dataset",
"## Dataset Description\nThis is a pretokenized version of the Python files of the GitHub Code dataset, that consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using BPE Tokenizer trained on code, available in this repo. Having a pretokenized dataset can speed up the training loop by not having to tokenize data at each batch call. We also include 'ratio_char_token' which gives the ratio between the number of characters in a file and the number of tokens we get after tokenization, this ratio can be a good filter to detect outlier files.",
"### How to use it\nTo avoid downloading the whole dataset, you can make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:"
] | [
"TAGS\n#region-us \n",
"# Pretokenized GitHub Code Dataset",
"## Dataset Description\nThis is a pretokenized version of the Python files of the GitHub Code dataset, that consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using BPE Tokenizer trained on code, available in this repo. Having a pretokenized dataset can speed up the training loop by not having to tokenize data at each batch call. We also include 'ratio_char_token' which gives the ratio between the number of characters in a file and the number of tokens we get after tokenization, this ratio can be a good filter to detect outlier files.",
"### How to use it\nTo avoid downloading the whole dataset, you can make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:"
] |
8371e5cf43c3564daa1314ecf6086b58fcbf2178 | ## Overview
Original dataset available [here](https://github.com/sheng-z/JOCI/tree/master/data).
This dataset is the "full" JOCI dataset, which is the file named `joci.csv.zip`.
## Dataset curation
The following processing is applied:
- `label` column renamed to `original_label`
- creation of the `label` column using the following mapping, following common practice ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/joci.py#L22-L27), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_joci.py#L7-L12))
```
{
0: "contradiction",
1: "contradiction",
2: "neutral",
3: "neutral",
4: "neutral",
5: "entailment",
}
```
- finally, converting this to the usual NLI classes, that is `{"entailment": 0, "neutral": 1, "contradiction": 2}`
## Code to create dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset
# read data
df = pd.read_csv("<path to folder>/joci.csv")
# column name to lower
df.columns = df.columns.str.lower()
# rename label column
df = df.rename(columns={"label": "original_label"})
# encode labels
df["label"] = df["original_label"].map({
0: "contradiction",
1: "contradiction",
2: "neutral",
3: "neutral",
4: "neutral",
5: "entailment",
})
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"context": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"original_label": Value(dtype="int32"),
"context_from": Value(dtype="string"),
"hypothesis_from": Value(dtype="string"),
"subset": Value(dtype="string"),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("joci", token="<token>")
```
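
A minimal usage sketch (assumptions: the repository id matches the `push_to_hub` call above and the data lands in the default `train` split):

```python
from datasets import load_dataset

joci = load_dataset("pietrolesci/joci")
ex = joci["train"][0]
print(ex["original_label"], "->", ex["label"])   # raw 0-5 ordinal score vs. mapped 3-class id
print(joci["train"].features["label"].names)     # ['entailment', 'neutral', 'contradiction']
```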
| pietrolesci/joci | [
"region:us"
] | 2022-04-25T12:32:52+00:00 | {} | 2022-04-25T12:33:08+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Original dataset available here.
This dataset is the "full" JOCI dataset, which is the file named 'URL'.
# Dataset curation
The following processing is applied,
- 'label' column renamed to 'original_label'
- creation of the 'label' column using the following mapping, using common practices (1, 2)
- finally, converting this to the usual NLI classes, that is '{"entailment": 0, "neutral": 1, "contradiction": 2}'
## Code to create dataset
| [
"## Overview\n\nOriginal dataset available here. \nThis dataset is the \"full\" JOCI dataset, which is the file named 'URL'.",
"# Dataset curation\nThe following processing is applied,\n\n- 'label' column renamed to 'original_label'\n- creation of the 'label' column using the following mapping, using common practices (1, 2)\n\n\n\n- finally, converting this to the usual NLI classes, that is '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'",
"## Code to create dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\n\nOriginal dataset available here. \nThis dataset is the \"full\" JOCI dataset, which is the file named 'URL'.",
"# Dataset curation\nThe following processing is applied,\n\n- 'label' column renamed to 'original_label'\n- creation of the 'label' column using the following mapping, using common practices (1, 2)\n\n\n\n- finally, converting this to the usual NLI classes, that is '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'",
"## Code to create dataset"
] |
82b6583887562130331c99bba2c994b44eae310f | ## Overview
Proposed by
```latex
@InProceedings{glockner_acl18,
author = {Glockner, Max and Shwartz, Vered and Goldberg, Yoav},
title = {Breaking NLI Systems with Sentences that Require Simple Lexical Inferences},
booktitle = {The 56th Annual Meeting of the Association for Computational Linguistics (ACL)},
month = {July},
year = {2018},
address = {Melbourne, Australia}
}
```
Original dataset available [here](https://github.com/BIU-NLP/Breaking_NLI).
## Dataset curation
Labels encoded with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}`
and made available in the `label` column.
## Code to create the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, Sequence
# load data
with open("<path to folder>/dataset.jsonl", "r") as fl:
data = fl.read().split("\n")
df = pd.DataFrame([eval(i) for i in data if len(i) > 0])
# encode labels
df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"sentence1": Value(dtype="string", id=None),
"category": Value(dtype="string", id=None),
"gold_label": Value(dtype="string", id=None),
"annotator_labels": Sequence(feature=Value(dtype="string", id=None), length=3),
"pairID": Value(dtype="int32", id=None),
"sentence2": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("breaking_nli", token="<token>", split="all")
``` | pietrolesci/breaking_nli | [
"region:us"
] | 2022-04-25T12:36:48+00:00 | {} | 2022-04-25T12:37:23+00:00 | [] | [] | TAGS
#region-us
| ## Overview
Proposed by
Original dataset available here.
## Dataset curation
Labels encoded with the following mapping '{"entailment": 0, "neutral": 1, "contradiction": 2}'
and made available in the 'label' column.
## Code to create the dataset
| [
"## Overview\n\nProposed by\n\n\nOriginal dataset available here.",
"## Dataset curation\nLabels encoded with the following mapping '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'\nand made available in the 'label' column.",
"## Code to create the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\n\nProposed by\n\n\nOriginal dataset available here.",
"## Dataset curation\nLabels encoded with the following mapping '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'\nand made available in the 'label' column.",
"## Code to create the dataset"
] |
e39c4231c5c09a3ee1d3fd9e9bdfab466a6254f6 | ## Overview
Original dataset available [here](https://people.ict.usc.edu/~gordon/copa.html).
Current dataset extracted from [this repo](https://github.com/felipessalvatore/NLI_datasets).
This is the "full" dataset.
## Curation
Same curation as the one applied in [this repo](https://github.com/felipessalvatore/NLI_datasets), that is
from the original COPA format:
|premise | choice1 | choice2 | label |
|---|---|---|---|
|My body cast a shadow over the grass | The sun was rising | The grass was cut | 0 |
to the NLI format:
| premise | hypothesis | label |
|---|---|---|
| My body cast a shadow over the grass | The sun was rising| entailment |
| My body cast a shadow over the grass | The grass was cut | not_entailment |
Also, the labels are encoded with the following mapping `{"not_entailment": 0, "entailment": 1}`
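As an illustration of this conversion, a single COPA item can be expanded into two NLI rows roughly as follows (function and field names are illustrative, not taken from the linked repo):

```python
def copa_item_to_nli(premise, choice1, choice2, label):
    # In COPA, label 0 means choice1 is the correct alternative.
    correct, wrong = (choice1, choice2) if label == 0 else (choice2, choice1)
    return [
        {"premise": premise, "hypothesis": correct, "label": "entailment"},
        {"premise": premise, "hypothesis": wrong, "label": "not_entailment"},
    ]
```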
## Code to generate dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
from pathlib import Path
# read data
path = Path("./nli_datasets")
datasets = {}
for dataset_path in path.iterdir():
datasets[dataset_path.name] = {}
for name in dataset_path.iterdir():
df = pd.read_csv(name)
datasets[dataset_path.name][name.name.split(".")[0]] = df
# merge all splits
df = pd.concat(list(datasets["copa"].values()))
# encode labels
df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("copa_nli", token="<token>")
``` | pietrolesci/copa_nli | [
"region:us"
] | 2022-04-25T12:46:42+00:00 | {} | 2022-04-25T12:47:10+00:00 | [] | [] | TAGS
#region-us
| Overview
--------
Original dataset available here.
Current dataset extracted from this repo.
This is the "full" dataset.
Curation
========
Same curation as the one applied in this repo, that is
from the original COPA format:
to the NLI format:
premise: My body cast a shadow over the grass, hypothesis: The sun was rising, label: entailment
premise: My body cast a shadow over the grass, hypothesis: The grass was cut, label: not\_entailment
Also, the labels are encoded with the following mapping '{"not\_entailment": 0, "entailment": 1}'
Code to generate dataset
------------------------
| [] | [
"TAGS\n#region-us \n"
] |
bbafadc05d7fdc9c668653a5e81bb034a99af3d9 |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# Dataset Card for HiNER-original
[](https://twitter.com/cfiltnlp)
[](https://twitter.com/PeopleCentredAI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/cfiltnlp/HiNER
- **Repository:** https://github.com/cfiltnlp/HiNER
- **Paper:** https://arxiv.org/abs/2204.13743
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-original
- **Point of Contact:** Rudra Murthy V
### Dataset Summary
This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.
**Note:** The dataset contains sentences from ILCI and other sources. The ILCI dataset requires a license from the Indian Language Consortium, so we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
Hindi
## Dataset Structure
### Data Instances
{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग','के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}
### Data Fields
- `id`: The ID value of the data point.
- `tokens`: Raw tokens in the dataset.
- `ner_tags`: the NER tags for this dataset, given as integer ids (see the decoding sketch below).
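
For example, assuming `ner_tags` is stored as a sequence of `ClassLabel` ids (the usual convention for NER datasets on the Hub), the integer tags can be decoded back to tag strings like this:

```python
from datasets import load_dataset

hiner = load_dataset("cfilt/HiNER-original")
tag_names = hiner["train"].features["ner_tags"].feature.names
example = hiner["train"][0]
print(list(zip(example["tokens"], (tag_names[t] for t in example["ner_tags"]))))
```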
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| original | 76025 | 10861 | 21722|
| collapsed | 76025 | 10861 | 21722|
## About
This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation Conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743).
### Recent Updates
* Version 0.0.5: HiNER initial release
## Usage
You should have the `datasets` package installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command to install it via pip:
```code
pip install datasets
```
To use the original dataset with all the tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-original')
```
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-collapsed')
```
However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder.
## Model(s)
Our best performing models are hosted on the HuggingFace models repository:
1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)
## Dataset Creation
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. It was introduced to provide new resources for Hindi, a language under-served in Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator over a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
doi = {10.48550/ARXIV.2204.13743},
url = {https://arxiv.org/abs/2204.13743},
author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | cfilt/HiNER-original | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:hi",
"license:cc-by-sa-4.0",
"arxiv:2204.13743",
"region:us"
] | 2022-04-25T12:55:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["hi"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "hiner-original-1", "pretty_name": "HiNER - Large Hindi Named Entity Recognition dataset"} | 2023-03-07T16:42:05+00:00 | [
"2204.13743"
] | [
"hi"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Hindi #license-cc-by-sa-4.0 #arxiv-2204.13743 #region-us
| 
Dataset Card for HiNER-original
===============================
 published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available here.
### Recent Updates
* Version 0.0.5: HiNER initial release
Usage
-----
You should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip:
To use the original dataset with all the tags, please use:
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:
However, the CoNLL format dataset files can also be found on this Git repository under the data folder.
Model(s)
--------
Our best performing models are hosted on the HuggingFace models repository:
1. HiNER-Collapsed-XLM-R
2. HiNER-Original-XLM-R
Dataset Creation
----------------
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator of a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
| [
"### Dataset Summary\n\n\nThis dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.\n\n\nNote: The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.",
"### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition",
"### Languages\n\n\nHindi\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग','के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner\\_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}",
"### Data Fields\n\n\n* 'id': The ID value of the data point.\n* 'tokens': Raw tokens in the dataset.\n* 'ner\\_tags': the NER tags for this dataset.",
"### Data Splits\n\n\n\nAbout\n-----\n\n\nThis repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available here.",
"### Recent Updates\n\n\n* Version 0.0.5: HiNER initial release\n\n\nUsage\n-----\n\n\nYou should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip:\n\n\nTo use the original dataset with all the tags, please use: \n\n\n\nTo use the collapsed dataset with only PER, LOC, and ORG tags, please use: \n\n\n\nHowever, the CoNLL format dataset files can also be found on this Git repository under the data folder.\n\n\nModel(s)\n--------\n\n\nOur best performing models are hosted on the HuggingFace models repository:\n\n\n1. HiNER-Collapsed-XLM-R\n2. HiNER-Original-XLM-R\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi",
"#### Who are the source language producers?\n\n\nVarious Government of India webpages",
"### Annotations",
"#### Annotation process\n\n\nThis dataset was manually annotated by a single annotator of a long span of time.",
"#### Who are the annotators?\n\n\nPallab Bhattacharjee",
"### Personal and Sensitive Information\n\n\nWe ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.",
"### Discussion of Biases\n\n\nAny biases contained in the data released by the Indian government are bound to be present in our data.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nPallab Bhattacharjee",
"### Licensing Information\n\n\nCC-BY-SA 4.0"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Hindi #license-cc-by-sa-4.0 #arxiv-2204.13743 #region-us \n",
"### Dataset Summary\n\n\nThis dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.\n\n\nNote: The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.",
"### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition",
"### Languages\n\n\nHindi\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग','के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner\\_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}",
"### Data Fields\n\n\n* 'id': The ID value of the data point.\n* 'tokens': Raw tokens in the dataset.\n* 'ner\\_tags': the NER tags for this dataset.",
"### Data Splits\n\n\n\nAbout\n-----\n\n\nThis repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Langauge Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available here.",
"### Recent Updates\n\n\n* Version 0.0.5: HiNER initial release\n\n\nUsage\n-----\n\n\nYou should have the 'datasets' packages installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip:\n\n\nTo use the original dataset with all the tags, please use: \n\n\n\nTo use the collapsed dataset with only PER, LOC, and ORG tags, please use: \n\n\n\nHowever, the CoNLL format dataset files can also be found on this Git repository under the data folder.\n\n\nModel(s)\n--------\n\n\nOur best performing models are hosted on the HuggingFace models repository:\n\n\n1. HiNER-Collapsed-XLM-R\n2. HiNER-Original-XLM-R\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nHiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi",
"#### Who are the source language producers?\n\n\nVarious Government of India webpages",
"### Annotations",
"#### Annotation process\n\n\nThis dataset was manually annotated by a single annotator of a long span of time.",
"#### Who are the annotators?\n\n\nPallab Bhattacharjee",
"### Personal and Sensitive Information\n\n\nWe ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.",
"### Discussion of Biases\n\n\nAny biases contained in the data released by the Indian government are bound to be present in our data.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nPallab Bhattacharjee",
"### Licensing Information\n\n\nCC-BY-SA 4.0"
] |
f88b0c931a28aac0824a988e60b76e5a83fd0da3 | annotations_creators:
- annotation
languages:
- pt-br
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- ner
# Dataset Card for c_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
C corpus is a set of annotated data in Portuguese for the recognition of named entities, extending the UlyssesNER-Br corpus.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Named Entity Recognition that aims to identify all named entities such as person names, locations, among others, in a text.
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | rosimeirecosta/c_corpus | [
"region:us"
] | 2022-04-25T18:49:57+00:00 | {} | 2022-04-25T19:03:08+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- annotation
languages:
- pt-br
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- ner
# Dataset Card for c_corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
C corpus is a set of annotated data in portuguese for the recognition of named entities, being the extension of the UlyssesNER-Br corpus.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Named Entity Recognition that aims to identify all named entities such as person names, locations, among others, in a text.
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for c_corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\n\nC corpus is a set of annotated data in portuguese for the recognition of named entities, being the extension of the UlyssesNER-Br corpus.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Named Entity Recognition that aims to identify all named entities such as person names, locations, among others, in a text.",
"### Languages\n\nBrazilian Portuguese",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for c_corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\n\nC corpus is a set of annotated data in portuguese for the recognition of named entities, being the extension of the UlyssesNER-Br corpus.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Named Entity Recognition that aims to identify all named entities such as person names, locations, among others, in a text.",
"### Languages\n\nBrazilian Portuguese",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
709fd56c19915e82eafc9bc39780e078daee5e00 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
List of LottieFiles URIs for research purposes
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | AmirulOm/lottie-urls | [
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:n<1K",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-04-25T21:45:19+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": [], "license": ["unknown"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "lottie-uri"} | 2022-10-25T09:12:14+00:00 | [] | [] | TAGS
#task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-n<1K #source_datasets-original #license-unknown #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
List of lottiefiles uri for research purposes
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nList of lottiefiles uri for research purposes",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-n<1K #source_datasets-original #license-unknown #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nList of lottiefiles uri for research purposes",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
5d2ef1db3b12764224290882c360f966bdbb8aeb | # AutoTrain Dataset for project: isear_bert
## Dataset Description
This dataset has been automatically processed by AutoTrain for project isear_bert.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "I was going to go on a vacation to Texas this summer but was \nunable to go because of registration.",
"target": 5
},
{
"text": "When someone whom I considered my friend, without telling me he \nwas annoyed, proceeded to ignore m[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=7, names=['anger', 'disgust', 'fear', 'guilt', 'joy', 'sadness', 'shame'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6008 |
| valid | 1507 |
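
A minimal loading sketch (assuming the AutoTrain data repository can be loaded with `datasets`; the field and split names follow the tables above):

```python
from datasets import load_dataset

ds = load_dataset("crcb/autotrain-data-isear_bert")
names = ds["train"].features["target"].names   # ['anger', 'disgust', 'fear', 'guilt', 'joy', 'sadness', 'shame']
sample = ds["train"][0]
print(sample["text"][:80], "->", names[sample["target"]])
```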
| crcb/autotrain-data-isear_bert | [
"task_categories:text-classification",
"region:us"
] | 2022-04-26T02:06:30+00:00 | {"task_categories": ["text-classification"]} | 2022-04-26T02:10:34+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: isear\_bert
==========================================
Dataset Descritpion
-------------------
This dataset has been automatically processed by AutoTrain for project isear\_bert.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
ac97fe2b8719890567bea1fbcf9a5b22594bf88b | Dataset for API: https://github.com/eleldar/Translation | eleldar/different_sub_normal_datasets | [
"region:us"
] | 2022-04-26T05:32:15+00:00 | {} | 2022-06-16T10:19:15+00:00 | [] | [] | TAGS
#region-us
| Dataset for API: URL | [] | [
"TAGS\n#region-us \n"
] |
06fbc6482522edcba63c38da575269369694c6f2 |
Original from https://gitlab.inria.fr/french-crows-pairs/acl-2022-paper-data-and-code/-/tree/main/.
# Data Statement for CrowS-Pairs-fr
> **How to use this document:**
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: https://www.aclweb.org/anthology/Q18-1041/ .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "DATASTATEMENT.md". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Crows-Pairs-fr
Citation (if available): Névéol A, Dupont Y, Bezançon J, Fort K. French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022
Data set developer(s): Aurélie Névéol, Yoann Dupont, Julien Bezançon, Karën Fort
Data statement author(s): Aurélie Névéol, Yoann Dupont
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
The French part of the corpus was built by first translating the original 1,508 sentence pairs of the English corpus into French.
We then adapted the crowdsourcing method described by [Nangia et al. (2020)](https://arxiv.org/pdf/2010.00133) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC [(Fiumara et al., 2020)](https://www.aclweb.org/anthology/2020.cllrd-1.1.pdf), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (https://languagearc.com/projects/19) to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: fr-FR
* Language variety description: French spoken by native French people from metropolitan France.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
N/A
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through calls for volunteers posted to social media and mailing lists in the French research community.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
The collected data consists of offensive stereotyped statements in French; they might be upsetting.
Alongside these stereotyped statements are paired anti-stereotyped statements.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
Examples were gathered using the LanguageArc site and by creating a dedicated project: https://languagearc.com/projects/19
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: [[email protected]](mailto:[email protected]). This document template is licensed as [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the [2020 LREC workshop on Data Statements](https://sites.google.com/uw.edu/data-statements-for-nlp/), by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski.
| BigScienceBiasEval/crows_pairs_multilingual | [
"language:en",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2010.00133",
"region:us"
] | 2022-04-26T06:49:31+00:00 | {"language": ["en", "fr"], "license": "cc-by-sa-4.0"} | 2024-01-14T11:46:09+00:00 | [
"2010.00133"
] | [
"en",
"fr"
] | TAGS
#language-English #language-French #license-cc-by-sa-4.0 #arxiv-2010.00133 #region-us
|
Original from URL
# Data Statement for CrowS-Pairs-fr
> How to use this document:
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: URL .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "URL". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Crows-Pairs-fr
Citation (if available): Névéol A, Dupont Y, Bezançon J, Fort K. French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022
Data set developer(s): Aurélie Névéol, Yoann Dupont, Julien Bezançon, Karën Fort
Data statement author(s): Aurélie Névéol, Yoann Dupont
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
The French part of the corpus was built by first translating the original 1,508 sentence pairs of the English corpus into French.
We then adapted the crowdsourcing method described by Nangia et al. (2020) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC (Fiumara et al., 2020), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (URL to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: fr-FR
* Language variety description: French spoken by native French people from metropolitan France.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
N/A
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through calls for volunteers posted to social media and mailing lists in the French research community.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
Collected data is a collection of offensive stereotyped statements in French, they might be upsetting.
Along these stereotyped statements are paired anti-stereotyped statements.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
Examples were gathered using the LanguageArc site and by creating a dedicated project: URL
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski.
| [
"# Data Statement for CrowS-Pairs-fr\n\n> How to use this document:\n> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.\n\n> For full details, the best source is the original Data Statements paper, here: URL .\n\n> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as \"URL\". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.\n\n> Only blockquoted content should be deleted; the final about statement should be left intact.\n\nData set name: Crows-Pairs-fr\n\nCitation (if available): Névéol A, Dupont Y, Bezançon J, Fort K. French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022\n\nData set developer(s): Aurélie Névéol, Yoann Dupont, Julien Bezançon, Karën Fort\n\nData statement author(s): Aurélie Névéol, Yoann Dupont\n\nOthers who contributed to this document: N/A\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## A. CURATION RATIONALE \n\n> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.\n\nThe French part of the corpus was built by first translating the original 1,508 sentence pairs of the English corpus into French.\n\nWe then adapted the crowdsourcing method described by Nangia et al. (2020) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC (Fiumara et al., 2020), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (URL to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way.",
"## B. LANGUAGE VARIETY/VARIETIES\n\n> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., \"English as spoken in Palo Alto, California\", or \"Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin\").\n\n* BCP-47 language tags: fr-FR\n* Language variety description: French spoken by native French people from metropolitan France.",
"## C. CONTRIBUTOR DEMOGRAPHIC\n> ## C. SPEAKER DEMOGRAPHIC\n\n> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: \n\nN/A",
"## D. ANNOTATOR DEMOGRAPHIC\n\n> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:\n\nParticipants to the collection project were recruited through calls for volunteers posted to social media and mailing lists in the French research community.",
"## E. SPEECH SITUATION\n\nN/A",
"## F. TEXT CHARACTERISTICS\n\n> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.\n\nCollected data is a collection of offensive stereotyped statements in French, they might be upsetting.\n\nAlong these stereotyped statements are paired anti-stereotyped statements.",
"## G. RECORDING QUALITY\n\nN/A",
"## H. OTHER\n\n> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.",
"## I. PROVENANCE APPENDIX\n\nExamples were gathered using the LanguageArc site and by creating a dedicated project: URL",
"## About this document\n\nA data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.\n\nData Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.\n\nThis version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski."
] | [
"TAGS\n#language-English #language-French #license-cc-by-sa-4.0 #arxiv-2010.00133 #region-us \n",
"# Data Statement for CrowS-Pairs-fr\n\n> How to use this document:\n> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.\n\n> For full details, the best source is the original Data Statements paper, here: URL .\n\n> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as \"URL\". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.\n\n> Only blockquoted content should be deleted; the final about statement should be left intact.\n\nData set name: Crows-Pairs-fr\n\nCitation (if available): Névéol A, Dupont Y, Bezançon J, Fort K. French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022\n\nData set developer(s): Aurélie Névéol, Yoann Dupont, Julien Bezançon, Karën Fort\n\nData statement author(s): Aurélie Névéol, Yoann Dupont\n\nOthers who contributed to this document: N/A\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## A. CURATION RATIONALE \n\n> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.\n\nThe French part of the corpus was built by first translating the original 1,508 sentence pairs of the English corpus into French.\n\nWe then adapted the crowdsourcing method described by Nangia et al. (2020) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC (Fiumara et al., 2020), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (URL to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way.",
"## B. LANGUAGE VARIETY/VARIETIES\n\n> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., \"English as spoken in Palo Alto, California\", or \"Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin\").\n\n* BCP-47 language tags: fr-FR\n* Language variety description: French spoken by native French people from metropolitan France.",
"## C. CONTRIBUTOR DEMOGRAPHIC\n> ## C. SPEAKER DEMOGRAPHIC\n\n> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: \n\nN/A",
"## D. ANNOTATOR DEMOGRAPHIC\n\n> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:\n\nParticipants to the collection project were recruited through calls for volunteers posted to social media and mailing lists in the French research community.",
"## E. SPEECH SITUATION\n\nN/A",
"## F. TEXT CHARACTERISTICS\n\n> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.\n\nCollected data is a collection of offensive stereotyped statements in French, they might be upsetting.\n\nAlong these stereotyped statements are paired anti-stereotyped statements.",
"## G. RECORDING QUALITY\n\nN/A",
"## H. OTHER\n\n> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.",
"## I. PROVENANCE APPENDIX\n\nExamples were gathered using the LanguageArc site and by creating a dedicated project: URL",
"## About this document\n\nA data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.\n\nData Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.\n\nThis version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski."
] |
5e93d44a6d6fb1fe35c41df7af170a8618b23e70 |
# Dataset Card for CrosswordQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Albert Xu](mailto:[email protected]) and [Eshaan Pathak](mailto:[email protected])
### Dataset Summary
The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
```
{
"id": 0,
"clue": "Clean-up target",
"answer": "mess"
}
```
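
A short sketch of loading and inspecting such instances with the `datasets` library follows; the repository id is taken from this card, while the split name and exact loading behavior are assumptions:

```python
from datasets import load_dataset

# Repository id from this card; the "train" split name is an assumption.
dataset = load_dataset("albertxu/CrosswordQA")
example = dataset["train"][0]

# Each instance pairs a clue with its (automatically segmented) answer.
print(example["clue"], "->", example["answer"])
```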
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | albertxu/CrosswordQA | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:unknown",
"region:us"
] | 2022-04-26T07:05:14+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"]} | 2022-10-29T22:45:36+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-unknown #region-us
|
# Dataset Card for CrosswordQA
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard:
- Point of Contact: Albert Xu and Eshaan Pathak
### Dataset Summary
The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for CrosswordQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact: Albert Xu and Eshaan Pathak",
"### Dataset Summary\n\nThe CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-unknown #region-us \n",
"# Dataset Card for CrosswordQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact: Albert Xu and Eshaan Pathak",
"### Dataset Summary\n\nThe CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
01020533529fc1cda0af7d99231eb96e7837f883 |
# Dataset Card for HuffPost
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/rmisra/news-category-dataset/metadata
### Dataset Summary
A dataset of approximately 200K news headlines collected from HuffPost between 2012 and 2018.
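
A minimal loading sketch with the `datasets` library is shown below; the repository id is taken from this card, while the split and column names are assumptions, since the card does not document them:

```python
from datasets import load_dataset

# Repository id comes from this card; the available splits and column
# names are assumptions and should be checked after loading.
dataset = load_dataset("khalidalt/HuffPost")

print(dataset)              # list the available splits and their sizes
print(dataset["train"][0])  # inspect one headline record (assumed "train" split)
```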
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
license: cc0-1.0
### Citation Information
```
@book{book,
author = {Misra, Rishabh and Grover, Jigyasa},
year = {2021},
month = {01},
pages = {},
title = {Sculpting Data for ML: The first act of Machine Learning},
isbn = {978-0-578-83125-1}
}
@dataset{dataset,
author = {Misra, Rishabh},
year = {2018},
month = {06},
pages = {},
title = {News Category Dataset},
doi = {10.13140/RG.2.2.20331.18729}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| khalidalt/HuffPost | [
"license:cc0-1.0",
"region:us"
] | 2022-04-26T08:32:57+00:00 | {"license": "cc0-1.0"} | 2023-05-19T17:35:08+00:00 | [] | [] | TAGS
#license-cc0-1.0 #region-us
|
# Dataset Card for HuffPost
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:URL
### Dataset Summary
A dataset of approximately 200K news headlines from the year 2012 to 2018 collected from HuffPost.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
license: cc0-1.0
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for HuffPost",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:URL",
"### Dataset Summary\n\nA dataset of approximately 200K news headlines from the year 2012 to 2018 collected from HuffPost.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nlicense: cc0-1.0",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#license-cc0-1.0 #region-us \n",
"# Dataset Card for HuffPost",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:URL",
"### Dataset Summary\n\nA dataset of approximately 200K news headlines from the year 2012 to 2018 collected from HuffPost.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nlicense: cc0-1.0",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
726a7cb5d4eab90c9035bd55b7bde3018c3bd06b |
### Dataset Summary
Kinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).
In total it contains 36,591 reviews from July 2004 to November 2012.
With the following distribution along the 3-point sentiment scale:
- Good: 27,264;
- Bad: 4,751;
- Neutral: 4,576.
### Data Fields
Each sample contains the following fields:
- **part**: rank list top250 or bottom100;
- **movie_name**;
- **review_id**;
- **author**: review author;
- **date**: date of a review;
- **title**: review title;
- **grade3**: sentiment score Good, Bad or Neutral;
- **grade10**: sentiment score on a 10-point scale parsed from text;
- **content**: review text.
### Python
```python
import pandas as pd

# Each line of kinopoisk.jsonl is one review record (part, movie_name,
# review_id, author, date, title, grade3, grade10, content).
df = pd.read_json('kinopoisk.jsonl', lines=True)
df.sample(5)  # inspect five random reviews
```
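
As a quick sanity check, the 3-point label distribution reported above can be recomputed from the `grade3` column; this sketch assumes `kinopoisk.jsonl` is available locally and that the label strings match the summary (Good, Bad, Neutral):

```python
import pandas as pd

df = pd.read_json('kinopoisk.jsonl', lines=True)

# Expected counts from the summary above:
# Good: 27,264, Bad: 4,751, Neutral: 4,576 (36,591 reviews in total).
print(df['grade3'].value_counts())
print(len(df), 'reviews in total')
```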
### Citation
```
@article{blinov2013research,
title={Research of lexical approach and machine learning methods for sentiment analysis},
author={Blinov, PD and Klekovkina, Maria and Kotelnikov, Eugeny and Pestov, Oleg},
journal={Computational Linguistics and Intellectual Technologies},
volume={2},
number={12},
pages={48--58},
year={2013}
}
```
| blinoff/kinopoisk | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | 2022-04-26T08:47:00+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Kinopoisk"} | 2022-10-23T15:51:58+00:00 | [] | [
"ru"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #language-Russian #region-us
|
### Dataset Summary
Kinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).
In total it contains 36,591 reviews from July 2004 to November 2012.
With following distribution along the 3-point sentiment scale:
- Good: 27,264;
- Bad: 4,751;
- Neutral: 4,576.
### Data Fields
Each sample contains the following fields:
- part: rank list top250 or bottom100;
- movie_name;
- review_id;
- author: review author;
- date: date of a review;
- title: review title;
- grade3: sentiment score Good, Bad or Neutral;
- grade10: sentiment score on a 10-point scale parsed from text;
- content: review text.
### Python
| [
"### Dataset Summary\n\nKinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).\n\nIn total it contains 36,591 reviews from July 2004 to November 2012.\n\nWith following distribution along the 3-point sentiment scale:\n- Good: 27,264;\n- Bad: 4,751;\n- Neutral: 4,576.",
"### Data Fields\n\nEach sample contains the following fields:\n- part: rank list top250 or bottom100;\n- movie_name;\n- review_id;\n- author: review author;\n- date: date of a review;\n- title: review title;\n- grade3: sentiment score Good, Bad or Neutral;\n- grade10: sentiment score on a 10-point scale parsed from text;\n- content: review text.",
"### Python"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #language-Russian #region-us \n",
"### Dataset Summary\n\nKinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).\n\nIn total it contains 36,591 reviews from July 2004 to November 2012.\n\nWith following distribution along the 3-point sentiment scale:\n- Good: 27,264;\n- Bad: 4,751;\n- Neutral: 4,576.",
"### Data Fields\n\nEach sample contains the following fields:\n- part: rank list top250 or bottom100;\n- movie_name;\n- review_id;\n- author: review author;\n- date: date of a review;\n- title: review title;\n- grade3: sentiment score Good, Bad or Neutral;\n- grade10: sentiment score on a 10-point scale parsed from text;\n- content: review text.",
"### Python"
] |
dec13d12c9fbda58367264342cba2376364aa2fe |
# Dataset Card for CANLI
### Dataset Summary
[CANLI: The Chinese Causative-Passive Homonymy Disambiguation: an Adversarial Dataset for NLI and a Probing Task](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.460.pdf)
The disambiguation of causative-passive homonymy (CPH) is potentially tricky for machines, as the causative and the passive
are not distinguished by the sentence's syntactic structure. By transforming CPH disambiguation to a challenging natural
language inference (NLI) task, we present the first Chinese Adversarial NLI challenge set (CANLI). We show that the pretrained
transformer model RoBERTa, fine-tuned on an existing large-scale Chinese NLI benchmark dataset, performs poorly on CANLI.
We also employ Word Sense Disambiguation as a probing task to investigate to what extent the CPH feature is captured in
the model's internal representation. We find that the model's performance on CANLI does not correspond to its internal
representation of CPH, which is the crucial linguistic ability central to the CANLI dataset.
### Languages
Chinese Mandarin
### Citation Information
@inproceedings{xu-markert-2022-chinese,
title = "The {C}hinese Causative-Passive Homonymy Disambiguation: an adversarial Dataset for {NLI} and a Probing Task",
author = "Xu, Shanshan and Markert, Katja",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.460",
pages = "4316--4323",
}
| sxu/CANLI | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:cn",
"license:afl-3.0",
"region:us"
] | 2022-04-26T12:31:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["cn"], "license": "afl-3.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"]} | 2023-01-06T13:23:58+00:00 | [] | [
"cn"
] | TAGS
#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #language-cn #license-afl-3.0 #region-us
|
# Dataset Card for CANLI
### Dataset Summary
CANLI: The Chinese Causative-Passive Homonymy Disambiguation: an Adversarial Dataset for NLI and a Probing Task
The disambiguation of causative-passive homonymy (CPH) is potentially tricky for machines, as the causative and the passive
are not distinguished by the sentences syntactic structure. By transforming CPH disambiguation to a challenging natural
language inference (NLI) task, we present the first Chinese Adversarial NLI challenge set (CANLI). We show that the pretrained
transformer model RoBERTa, fine-tuned on an existing large-scale Chinese NLI benchmark dataset, performs poorly on CANLI.
We also employ Word Sense Disambiguation as a probing task to investigate to what extent the CPH feature is captured in
the models internal representation. We find that the models performance on CANLI does not correspond to its internal
representation of CPH, which is the crucial linguistic ability central to the CANLI dataset.
### Languages
Chinese Mandarin
@inproceedings{xu-markert-2022-chinese,
title = "The {C}hinese Causative-Passive Homonymy Disambiguation: an adversarial Dataset for {NLI} and a Probing Task",
author = "Xu, Shanshan and Markert, Katja",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "URL
pages = "4316--4323",
}
| [
"# Dataset Card for CANLI",
"### Dataset Summary\nCANLI: The Chinese Causative-Passive Homonymy Disambiguation: an Adversarial Dataset for NLI and a Probing Task\n\nThe disambiguation of causative-passive homonymy (CPH) is potentially tricky for machines, as the causative and the passive\nare not distinguished by the sentences\u0019 syntactic structure. By transforming CPH disambiguation to a challenging natural\nlanguage inference (NLI) task, we present the first Chinese Adversarial NLI challenge set (CANLI). We show that the pretrained\ntransformer model RoBERTa, fine-tuned on an existing large-scale Chinese NLI benchmark dataset, performs poorly on CANLI.\nWe also employ Word Sense Disambiguation as a probing task to investigate to what extent the CPH feature is captured in\nthe model\u0019s internal representation. We find that the model\u0019s performance on CANLI does not correspond to its internal\nrepresentation of CPH, which is the crucial linguistic ability central to the CANLI dataset.",
"### Languages\nChinese Mandarin\n\n\n\n @inproceedings{xu-markert-2022-chinese,\n title = \"The {C}hinese Causative-Passive Homonymy Disambiguation: an adversarial Dataset for {NLI} and a Probing Task\",\n author = \"Xu, Shanshan and Markert, Katja\",\n booktitle = \"Proceedings of the Thirteenth Language Resources and Evaluation Conference\",\n month = jun,\n year = \"2022\",\n address = \"Marseille, France\",\n publisher = \"European Language Resources Association\",\n url = \"URL\n pages = \"4316--4323\",\n}"
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #language-cn #license-afl-3.0 #region-us \n",
"# Dataset Card for CANLI",
"### Dataset Summary\nCANLI: The Chinese Causative-Passive Homonymy Disambiguation: an Adversarial Dataset for NLI and a Probing Task\n\nThe disambiguation of causative-passive homonymy (CPH) is potentially tricky for machines, as the causative and the passive\nare not distinguished by the sentences\u0019 syntactic structure. By transforming CPH disambiguation to a challenging natural\nlanguage inference (NLI) task, we present the first Chinese Adversarial NLI challenge set (CANLI). We show that the pretrained\ntransformer model RoBERTa, fine-tuned on an existing large-scale Chinese NLI benchmark dataset, performs poorly on CANLI.\nWe also employ Word Sense Disambiguation as a probing task to investigate to what extent the CPH feature is captured in\nthe model\u0019s internal representation. We find that the model\u0019s performance on CANLI does not correspond to its internal\nrepresentation of CPH, which is the crucial linguistic ability central to the CANLI dataset.",
"### Languages\nChinese Mandarin\n\n\n\n @inproceedings{xu-markert-2022-chinese,\n title = \"The {C}hinese Causative-Passive Homonymy Disambiguation: an adversarial Dataset for {NLI} and a Probing Task\",\n author = \"Xu, Shanshan and Markert, Katja\",\n booktitle = \"Proceedings of the Thirteenth Language Resources and Evaluation Conference\",\n month = jun,\n year = \"2022\",\n address = \"Marseille, France\",\n publisher = \"European Language Resources Association\",\n url = \"URL\n pages = \"4316--4323\",\n}"
] |
7455d89e3da5e569b49d6ae1005fd52e89eb5087 |
# Dataset Card for "scientific_papers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/armancohan/long-summarization](https://github.com/armancohan/long-summarization)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8591.93 MB
- **Size of the generated dataset:** 9622.19 MB
- **Total amount of disk used:** 18214.12 MB
### Dataset Summary
The scientific papers dataset contains two sets of long and structured documents.
The datasets are obtained from the ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have three features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n".
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### arxiv
- **Size of downloaded dataset files:** 4295.97 MB
- **Size of the generated dataset:** 7231.70 MB
- **Total amount of disk used:** 11527.66 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
#### pubmed
- **Size of downloaded dataset files:** 4295.97 MB
- **Size of the generated dataset:** 2390.49 MB
- **Total amount of disk used:** 6686.46 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
"article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
"section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
#### pubmed
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|arxiv |203037| 6436|6440|
|pubmed|119924| 6633|6658|
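
The splits above can be checked programmatically; this is a sketch under the assumption that the data is exposed through the canonical `scientific_papers` loader with its `arxiv` and `pubmed` configurations:

```python
from datasets import load_dataset

# "arxiv" is one of the two configurations; "pubmed" is the other.
dataset = load_dataset("scientific_papers", "arxiv")

# Split sizes should match the table above.
print({split: len(dataset[split]) for split in dataset})

# Each example exposes the three string fields described in Data Fields.
example = dataset["train"][0]
print(example["section_names"])
print(example["abstract"][:200])
```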
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
| ENM/dataset-prueba | [
"language:en",
"region:us"
] | 2022-04-26T17:11:02+00:00 | {"language": ["en"], "pretty_name": "ScientificPapers"} | 2022-10-25T09:12:20+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| Dataset Card for "scientific\_papers"
=====================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper:
* Point of Contact:
* Size of downloaded dataset files: 8591.93 MB
* Size of the generated dataset: 9622.19 MB
* Total amount of disk used: 18214.12 MB
### Dataset Summary
Scientific papers datasets contains two sets of long and structured documents.
The datasets are obtained from ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have two features:
* article: the body of the document, pagragraphs seperated by "/n".
* abstract: the abstract of the document, pagragraphs seperated by "/n".
* section\_names: titles of sections, seperated by "/n".
### Supported Tasks and Leaderboards
### Languages
Dataset Structure
-----------------
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### arxiv
* Size of downloaded dataset files: 4295.97 MB
* Size of the generated dataset: 7231.70 MB
* Total amount of disk used: 11527.66 MB
An example of 'train' looks as follows.
#### pubmed
* Size of downloaded dataset files: 4295.97 MB
* Size of the generated dataset: 2390.49 MB
* Total amount of disk used: 6686.46 MB
An example of 'validation' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### arxiv
* 'article': a 'string' feature.
* 'abstract': a 'string' feature.
* 'section\_names': a 'string' feature.
#### pubmed
* 'article': a 'string' feature.
* 'abstract': a 'string' feature.
* 'section\_names': a 'string' feature.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset.
| [
"### Dataset Summary\n\n\nScientific papers datasets contains two sets of long and structured documents.\nThe datasets are obtained from ArXiv and PubMed OpenAccess repositories.\nBoth \"arxiv\" and \"pubmed\" have two features:\n\n\n* article: the body of the document, pagragraphs seperated by \"/n\".\n* abstract: the abstract of the document, pagragraphs seperated by \"/n\".\n* section\\_names: titles of sections, seperated by \"/n\".",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.",
"### Data Instances",
"#### arxiv\n\n\n* Size of downloaded dataset files: 4295.97 MB\n* Size of the generated dataset: 7231.70 MB\n* Total amount of disk used: 11527.66 MB\n\n\nAn example of 'train' looks as follows.",
"#### pubmed\n\n\n* Size of downloaded dataset files: 4295.97 MB\n* Size of the generated dataset: 2390.49 MB\n* Total amount of disk used: 6686.46 MB\n\n\nAn example of 'validation' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### arxiv\n\n\n* 'article': a 'string' feature.\n* 'abstract': a 'string' feature.\n* 'section\\_names': a 'string' feature.",
"#### pubmed\n\n\n* 'article': a 'string' feature.\n* 'abstract': a 'string' feature.\n* 'section\\_names': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset."
] | [
"TAGS\n#language-English #region-us \n",
"### Dataset Summary\n\n\nScientific papers datasets contains two sets of long and structured documents.\nThe datasets are obtained from ArXiv and PubMed OpenAccess repositories.\nBoth \"arxiv\" and \"pubmed\" have two features:\n\n\n* article: the body of the document, pagragraphs seperated by \"/n\".\n* abstract: the abstract of the document, pagragraphs seperated by \"/n\".\n* section\\_names: titles of sections, seperated by \"/n\".",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.",
"### Data Instances",
"#### arxiv\n\n\n* Size of downloaded dataset files: 4295.97 MB\n* Size of the generated dataset: 7231.70 MB\n* Total amount of disk used: 11527.66 MB\n\n\nAn example of 'train' looks as follows.",
"#### pubmed\n\n\n* Size of downloaded dataset files: 4295.97 MB\n* Size of the generated dataset: 2390.49 MB\n* Total amount of disk used: 6686.46 MB\n\n\nAn example of 'validation' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### arxiv\n\n\n* 'article': a 'string' feature.\n* 'abstract': a 'string' feature.\n* 'section\\_names': a 'string' feature.",
"#### pubmed\n\n\n* 'article': a 'string' feature.\n* 'abstract': a 'string' feature.\n* 'section\\_names': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset."
] |
5fc63ea7788cd5b4edb6aeba801cdc7083cf07e9 |
# Dataset Card for the-reddit-nft-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-nft-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditnftdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditnftdataset)
### Dataset Summary
A comprehensive dataset of Reddit's NFT discussion.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
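
Because posts and comments ship in separate files but share most of the fields above, a small amount of glue code is enough to explore them together. The sketch below uses pandas; the file names are hypothetical, so check the files shipped in this repository for the actual names and format (CSV vs. Parquet).

```python
import pandas as pd

# Hypothetical file names - check this repository for the actual file names/format.
posts = pd.read_csv("the-reddit-nft-posts.csv")
comments = pd.read_csv("the-reddit-nft-comments.csv")

# Columns shared by both files (see the field list above).
shared = ["type", "id", "subreddit.id", "subreddit.name", "subreddit.nsfw",
          "created_utc", "permalink", "score"]
combined = pd.concat([posts[shared], comments[shared]], ignore_index=True)

# Most active subreddits, split by posts vs. comments.
print(combined.groupby(["subreddit.name", "type"]).size().sort_values(ascending=False).head(10))

# Comment-only field: the bundled sentiment score, intended for exploratory analysis.
print(comments["sentiment"].describe())
```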
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/the-reddit-nft-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-26T18:52:29+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T16:52:49+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for the-reddit-nft-dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Licensing Information
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
A comprehensive dataset of Reddit's NFT discussion.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
| [
"# Dataset Card for the-reddit-nft-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nA comprehensive dataset of Reddit's NFT discussion.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for the-reddit-nft-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nA comprehensive dataset of Reddit's NFT discussion.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] |
739825f9dbb674e44f71019730d403f626aac4be | POS tagging on the Universal Dependencies dataset
| aakanksha/udpos | [
"region:us"
] | 2022-04-27T00:16:51+00:00 | {} | 2022-04-27T18:21:57+00:00 | [] | [] | TAGS
#region-us
| POS tagging on the Universal Dependencies dataset
| [] | [
"TAGS\n#region-us \n"
] |
578d877dd50601749b406d53805a4bd332b63091 | annotations_creators:
- found
language_creators:
- found
languages:
- zh
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: symptom
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- word-sense-disambiguation | junliang/symptom | [
"region:us"
] | 2022-04-27T06:47:35+00:00 | {} | 2022-05-11T11:57:22+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- found
language_creators:
- found
languages:
- zh
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: symptom
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- word-sense-disambiguation | [] | [
"TAGS\n#region-us \n"
] |
a3fc132b1a1b550f82e0801e9ded2ae475b659ea |
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
https://github.com/facebookresearch/LAMA
- **Repository:**
https://github.com/facebookresearch/LAMA
- **Paper:**
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
### Dataset Summary
This dataset provides the data for LAMA. This dataset only contains TRex
(subset of wikidata triples).
The dataset includes some cleanup, and addition of a masked sentence
and associated answers for the [MASK] token. The accuracy in
predicting the [MASK] token shows how well the language model knows
facts and common sense information. The [MASK] tokens are only for the
"object" slots.
This version also contains natural-language questions in addition to the templates, which can be used to probe non-masking models as well.
See the paper for more details. For more information, also see:
https://github.com/facebookresearch/LAMA
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
```
{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}
```

The trex configuration contains 34039 examples.
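
Because every record carries both a cloze-style `template` and a natural-language `question`, building probes is mostly string substitution. Below is a minimal illustration (not the official LAMA evaluation code) of turning the record above into a masked prompt or a question, with `obj_label` as the expected answer:

```python
def to_cloze(record, mask_token="[MASK]"):
    """Fill the subject slot of the template and mask the object slot."""
    prompt = record["template"].replace("[X]", record["sub_label"]).replace("[Y]", mask_token)
    return prompt, record["obj_label"]

def to_question(record):
    """Use the natural-language question for non-masking (e.g. generative) models."""
    return record["question"].replace("[X]", record["sub_label"]), record["obj_label"]

record = {
    "uuid": "a37257ae-4cbb-4309-a78a-623036c96797",
    "sub_label": "Pianos Become the Teeth",
    "predicate_id": "P740",
    "obj_label": "Baltimore",
    "template": "[X] was founded in [Y] .",
    "type": "N-1",
    "question": "Where was [X] founded?",
}
print(to_cloze(record))     # ('Pianos Become the Teeth was founded in [MASK] .', 'Baltimore')
print(to_question(record))  # ('Where was Pianos Become the Teeth founded?', 'Baltimore')
```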
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets, with cleanups applied for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but likely names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data comes from human annotators, it is likely to contain biases.
[More Information Needed]
### Other Known Limitations
The original documentation for the data fields is limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
The Creative Commons Attribution-NonCommercial 4.0 International License. See https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
| janck/bigscience-lama | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:fact-checking-retrieval",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"probing",
"region:us"
] | 2022-04-27T08:20:12+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": {"trex": ["1M<n<10M"]}, "task_categories": ["text-retrieval", "text-classification"], "task_ids": ["fact-checking-retrieval", "text-scoring"], "paperswithcode_id": "lama", "pretty_name": "LAMA: LAnguage Model Analysis - BigScience version", "tags": ["probing"]} | 2022-10-21T07:16:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-retrieval #task_categories-text-classification #task_ids-fact-checking-retrieval #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #probing #region-us
|
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
URL
- Repository:
URL
- Paper:
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={URL
}
### Dataset Summary
This dataset provides the data for LAMA. This dataset only contains TRex
(subset of wikidata triples).
The dataset includes some cleanup, and addition of a masked sentence
and associated answers for the [MASK] token. The accuracy in
predicting the [MASK] token shows how well the language model knows
facts and common sense information. The [MASK] tokens are only for the
"object" slots.
This version also contains natural-language questions in addition to the templates, which can be used to probe non-masking models as well.
See the paper for more details. For more information, also see:
URL
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
''
{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}
34039
''
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets, with cleanups applied for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but likely names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data comes from human annotators, it is likely to contain biases.
### Other Known Limitations
The original documentation for the data fields is limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
The Creative Commons Attribution-Noncommercial 4.0 International License. see URL
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={URL
}
| [
"# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\n@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\\\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\\\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={URL\n}",
"### Dataset Summary\n\nThis dataset provides the data for LAMA. This dataset only contains TRex\n(subset of wikidata triples).\n\nThe dataset includes some cleanup, and addition of a masked sentence\nand associated answers for the [MASK] token. The accuracy in\npredicting the [MASK] token shows how well the language model knows\nfacts and common sense information. The [MASK] tokens are only for the\n\"object\" slots.\n\nThis version also contains questions instead of templates that can be used to probe also non-masking models.\n\n\nSee the paper for more details. For more information, also see:\nURL",
"### Languages\nen",
"## Dataset Structure",
"### Data Instances\n\n\nThe trex config has the following fields:\n\n\n''\n{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}\n34039\n''",
"### Data Splits\n\nThere are no data splits.",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was gathered and created to probe what language models understand.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nSee the reaserch paper and website for more detail. The dataset was\ncreated gathered from various other datasets with cleanups for probing.",
"#### Who are the source language producers?\n\nThe LAMA authors and the original authors of the various configs.",
"### Annotations",
"#### Annotation process\n\nHuman annotations under the original datasets (conceptnet), and various machine annotations.",
"#### Who are the annotators?\n\nHuman annotations and machine annotations.",
"### Personal and Sensitive Information\n\nUnkown, but likely names of famous people.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe goal for the work is to probe the understanding of language models.",
"### Discussion of Biases\n\nSince the data is from human annotators, there is likely to be baises.",
"### Other Known Limitations\n\nThe original documentation for the datafields are limited.",
"## Additional Information",
"### Dataset Curators\n\nThe authors of LAMA at Facebook and the authors of the original datasets.",
"### Licensing Information\n\nThe Creative Commons Attribution-Noncommercial 4.0 International License. see URL\n\n\n\n@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\\\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\\\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={URL\n}"
] | [
"TAGS\n#task_categories-text-retrieval #task_categories-text-classification #task_ids-fact-checking-retrieval #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #probing #region-us \n",
"# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\n@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\\\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\\\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={URL\n}",
"### Dataset Summary\n\nThis dataset provides the data for LAMA. This dataset only contains TRex\n(subset of wikidata triples).\n\nThe dataset includes some cleanup, and addition of a masked sentence\nand associated answers for the [MASK] token. The accuracy in\npredicting the [MASK] token shows how well the language model knows\nfacts and common sense information. The [MASK] tokens are only for the\n\"object\" slots.\n\nThis version also contains questions instead of templates that can be used to probe also non-masking models.\n\n\nSee the paper for more details. For more information, also see:\nURL",
"### Languages\nen",
"## Dataset Structure",
"### Data Instances\n\n\nThe trex config has the following fields:\n\n\n''\n{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}\n34039\n''",
"### Data Splits\n\nThere are no data splits.",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was gathered and created to probe what language models understand.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nSee the reaserch paper and website for more detail. The dataset was\ncreated gathered from various other datasets with cleanups for probing.",
"#### Who are the source language producers?\n\nThe LAMA authors and the original authors of the various configs.",
"### Annotations",
"#### Annotation process\n\nHuman annotations under the original datasets (conceptnet), and various machine annotations.",
"#### Who are the annotators?\n\nHuman annotations and machine annotations.",
"### Personal and Sensitive Information\n\nUnkown, but likely names of famous people.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe goal for the work is to probe the understanding of language models.",
"### Discussion of Biases\n\nSince the data is from human annotators, there is likely to be baises.",
"### Other Known Limitations\n\nThe original documentation for the datafields are limited.",
"## Additional Information",
"### Dataset Curators\n\nThe authors of LAMA at Facebook and the authors of the original datasets.",
"### Licensing Information\n\nThe Creative Commons Attribution-Noncommercial 4.0 International License. see URL\n\n\n\n@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\\\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\\\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={URL\n}"
] |
96a234bd25c04939c4a79213eb764ae90e4d0d81 |
# Dataset Card for BANKING77
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets)
- **Repository:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets)
- **Paper:** [ArXiv](https://arxiv.org/abs/2003.04807)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset composed of online banking queries annotated with their corresponding intents.
BANKING77 dataset provides a very fine-grained set of intents in a banking domain.
It comprises 13,083 customer service queries labeled with 77 intents.
It focuses on fine-grained single-domain intent detection.
### Supported Tasks and Leaderboards
Intent classification, intent detection
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'label': 11, # integer label corresponding to "card_arrival" intent
'text': 'I am still waiting on my card?'
}
```
### Data Fields
- `text`: a string feature.
- `label`: One of classification labels (0-76) corresponding to unique intents.
Intent names are mapped to `label` in the following way:
| label | intent (category) |
|---:|:-------------------------------------------------|
| 0 | activate_my_card |
| 1 | age_limit |
| 2 | apple_pay_or_google_pay |
| 3 | atm_support |
| 4 | automatic_top_up |
| 5 | balance_not_updated_after_bank_transfer |
| 6 | balance_not_updated_after_cheque_or_cash_deposit |
| 7 | beneficiary_not_allowed |
| 8 | cancel_transfer |
| 9 | card_about_to_expire |
| 10 | card_acceptance |
| 11 | card_arrival |
| 12 | card_delivery_estimate |
| 13 | card_linking |
| 14 | card_not_working |
| 15 | card_payment_fee_charged |
| 16 | card_payment_not_recognised |
| 17 | card_payment_wrong_exchange_rate |
| 18 | card_swallowed |
| 19 | cash_withdrawal_charge |
| 20 | cash_withdrawal_not_recognised |
| 21 | change_pin |
| 22 | compromised_card |
| 23 | contactless_not_working |
| 24 | country_support |
| 25 | declined_card_payment |
| 26 | declined_cash_withdrawal |
| 27 | declined_transfer |
| 28 | direct_debit_payment_not_recognised |
| 29 | disposable_card_limits |
| 30 | edit_personal_details |
| 31 | exchange_charge |
| 32 | exchange_rate |
| 33 | exchange_via_app |
| 34 | extra_charge_on_statement |
| 35 | failed_transfer |
| 36 | fiat_currency_support |
| 37 | get_disposable_virtual_card |
| 38 | get_physical_card |
| 39 | getting_spare_card |
| 40 | getting_virtual_card |
| 41 | lost_or_stolen_card |
| 42 | lost_or_stolen_phone |
| 43 | order_physical_card |
| 44 | passcode_forgotten |
| 45 | pending_card_payment |
| 46 | pending_cash_withdrawal |
| 47 | pending_top_up |
| 48 | pending_transfer |
| 49 | pin_blocked |
| 50 | receiving_money |
| 51 | Refund_not_showing_up |
| 52 | request_refund |
| 53 | reverted_card_payment? |
| 54 | supported_cards_and_currencies |
| 55 | terminate_account |
| 56 | top_up_by_bank_transfer_charge |
| 57 | top_up_by_card_charge |
| 58 | top_up_by_cash_or_cheque |
| 59 | top_up_failed |
| 60 | top_up_limits |
| 61 | top_up_reverted |
| 62 | topping_up_by_card |
| 63 | transaction_charged_twice |
| 64 | transfer_fee_charged |
| 65 | transfer_into_account |
| 66 | transfer_not_received_by_recipient |
| 67 | transfer_timing |
| 68 | unable_to_verify_identity |
| 69 | verify_my_identity |
| 70 | verify_source_of_funds |
| 71 | verify_top_up |
| 72 | virtual_card_not_working |
| 73 | visa_or_mastercard |
| 74 | why_verify_identity |
| 75 | wrong_amount_of_cash_received |
| 76 | wrong_exchange_rate_for_cash_withdrawal |
### Data Splits
| Dataset statistics | Train | Test |
| --- | --- | --- |
| Number of examples | 10 003 | 3 080 |
| Average character length | 59.5 | 54.2 |
| Number of intents | 77 | 77 |
| Number of domains | 1 | 1 |
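
A minimal loading sketch ties the pieces above together. It assumes the dataset is pulled from the Hugging Face Hub under this card's id and that `label` is stored as a `ClassLabel` feature carrying the 77 intent names:

```python
from datasets import load_dataset

banking = load_dataset("PolyAI/banking77")           # splits: train / test
label_feature = banking["train"].features["label"]   # assumed ClassLabel with the 77 intent names

example = banking["train"][0]
print(example["text"])
print(label_feature.int2str(example["label"]))       # integer label -> intent name
print(label_feature.str2int("card_arrival"))         # e.g. 11, matching the table above
```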
## Dataset Creation
### Curation Rationale
Previous intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large scale *multi-domain* datasets ([HWU64](https://github.com/xliuhw/NLU-Evaluation-Data) and [CLINC150](https://github.com/clinc/oos-eval)), the examples per each domain may not sufficiently capture the full complexity of each domain as encountered "in the wild". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single-domain* i.e. **banking**. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better intent detection systems.
Any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[PolyAI](https://github.com/PolyAI-LDN)
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
```
@inproceedings{Casanueva2020,
author = {I{\~{n}}igo Casanueva and Tadas Temcinas and Daniela Gerz and Matthew Henderson and Ivan Vulic},
title = {Efficient Intent Detection with Dual Sentence Encoders},
year = {2020},
month = {mar},
note = {Data available at https://github.com/PolyAI-LDN/task-specific-datasets},
url = {https://arxiv.org/abs/2003.04807},
booktitle = {Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020}
}
```
### Contributions
Thanks to [@dkajtoch](https://github.com/dkajtoch) for adding this dataset.
| PolyAI/banking77 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2003.04807",
"region:us"
] | 2022-04-27T11:54:13+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-class-classification"], "pretty_name": "BANKING77", "extended": ["original"]} | 2022-10-25T09:12:22+00:00 | [
"2003.04807"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2003.04807 #region-us
| Dataset Card for BANKING77
==========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: Github
* Repository: Github
* Paper: ArXiv
* Leaderboard:
* Point of Contact:
### Dataset Summary
Dataset composed of online banking queries annotated with their corresponding intents.
BANKING77 dataset provides a very fine-grained set of intents in a banking domain.
It comprises 13,083 customer service queries labeled with 77 intents.
It focuses on fine-grained single-domain intent detection.
### Supported Tasks and Leaderboards
Intent classification, intent detection
### Languages
English
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows:
### Data Fields
* 'text': a string feature.
* 'label': One of classification labels (0-76) corresponding to unique intents.
Intent names are mapped to 'label' in the following way:
### Data Splits
Dataset statistics: Number of examples, Train: 10 003, Test: 3 080
Dataset statistics: Average character length, Train: 59.5, Test: 54.2
Dataset statistics: Number of intents, Train: 77, Test: 77
Dataset statistics: Number of domains, Train: 1, Test: 1
Dataset Creation
----------------
### Curation Rationale
Previous intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large scale *multi-domain* datasets (HWU64 and CLINC150), the examples per each domain may not sufficiently capture the full complexity of each domain as encountered "in the wild". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single-domain* i.e. banking. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop better intent detection systems.
Any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
PolyAI
### Licensing Information
Creative Commons Attribution 4.0 International
### Contributions
Thanks to @dkajtoch for adding this dataset.
| [
"### Dataset Summary\n\n\nDataset composed of online banking queries annotated with their corresponding intents.\n\n\nBANKING77 dataset provides a very fine-grained set of intents in a banking domain.\nIt comprises 13,083 customer service queries labeled with 77 intents.\nIt focuses on fine-grained single-domain intent detection.",
"### Supported Tasks and Leaderboards\n\n\nIntent classification, intent detection",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows:",
"### Data Fields\n\n\n* 'text': a string feature.\n* 'label': One of classification labels (0-76) corresponding to unique intents.\n\n\nIntent names are mapped to 'label' in the following way:",
"### Data Splits\n\n\nDataset statistics: Number of examples, Train: 10 003, Test: 3 080\nDataset statistics: Average character length, Train: 59.5, Test: 54.2\nDataset statistics: Number of intents, Train: 77, Test: 77\nDataset statistics: Number of domains, Train: 1, Test: 1\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nPrevious intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large scale *multi-domain* datasets (HWU64 and CLINC150), the examples per each domain may not sufficiently capture the full complexity of each domain as encountered \"in the wild\". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single-domain* i.e. banking. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nThe dataset does not contain any additional annotations.",
"#### Who are the annotators?\n\n\n[N/A]",
"### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset it to help develop better intent detection systems.\n\n\nAny comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nPolyAI",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\n\nThanks to @dkajtoch for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2003.04807 #region-us \n",
"### Dataset Summary\n\n\nDataset composed of online banking queries annotated with their corresponding intents.\n\n\nBANKING77 dataset provides a very fine-grained set of intents in a banking domain.\nIt comprises 13,083 customer service queries labeled with 77 intents.\nIt focuses on fine-grained single-domain intent detection.",
"### Supported Tasks and Leaderboards\n\n\nIntent classification, intent detection",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows:",
"### Data Fields\n\n\n* 'text': a string feature.\n* 'label': One of classification labels (0-76) corresponding to unique intents.\n\n\nIntent names are mapped to 'label' in the following way:",
"### Data Splits\n\n\nDataset statistics: Number of examples, Train: 10 003, Test: 3 080\nDataset statistics: Average character length, Train: 59.5, Test: 54.2\nDataset statistics: Number of intents, Train: 77, Test: 77\nDataset statistics: Number of domains, Train: 1, Test: 1\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nPrevious intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large scale *multi-domain* datasets (HWU64 and CLINC150), the examples per each domain may not sufficiently capture the full complexity of each domain as encountered \"in the wild\". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single-domain* i.e. banking. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nThe dataset does not contain any additional annotations.",
"#### Who are the annotators?\n\n\n[N/A]",
"### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset it to help develop better intent detection systems.\n\n\nAny comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nPolyAI",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\n\nThanks to @dkajtoch for adding this dataset."
] |
e504e76976d00dbe5d2ee3945bf6d42c65e2bd1d | # AutoTrain Dataset for project: Rule
## Dataset Descritpion
This dataset has been automatically processed by AutoTrain for project Rule.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u672c\u516c\u53f8\u4f1a\u5728\u60a8\u767b\u5f55\u53ca\u7248\u672c\u66f4\u65b0\u65f6\u4ee5\u63a8\u9001\u901a\u77e5\u3001\u5f39\u6846\u7684\u5f62\u5f0f\u5411\u60a8\u5c55\u793a\u53d8\u66f4\u540e\u7684\u9690\u79c1\u653f\u7b56",
"target": 1
},
{
"text": "\u6211\u4eec\u53ef\u80fd\u9002\u65f6\u4f1a\u5bf9\u672c\u9690\u79c1\u6743\u653f\u7b56\u8fdb\u884c\u8c03\u6574\u6216\u53d8\u66f4\uff0c\u672c\u9690\u79c1\u6743\u653f\u7b56\u7684\u4efb\u4f55\u66f4\u65b0\u5c06\u4ee5\u6807\u6ce8\u66f4\u65b0\u65f6\u95f4\u7684\u65b9\u5f0f\u516c\u5e03\u5728\u6211\u4eec\u7f51\u7ad9\u4e0a\uff0c\u9664\u6cd5\u5f8b\u6cd5\u89c4\u6216\u76d1\u7ba1\u89c4\u5b9a\u53e6\u6709\u5f3a\u5236\u6027\u89c4\u5b9a\u5916\uff0c\u7ecf\u8c03\u6574\u6216\u53d8\u66f4\u7684\u5185\u5bb9\u4e00\u7ecf\u901a\u77e5\u6216\u516c\u5e03\u540e\u76847\u65e5\u540e\u751f\u6548",
"target": 0
}
]
```
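
The `text` values above are stored with JSON `\uXXXX` escapes; once the JSON is parsed they are ordinary Chinese strings. A small illustration follows (this snippet only decodes the sample shown and is not part of the AutoTrain pipeline; the text is truncated here for brevity):

```python
import json

# First sample from above, kept as raw JSON with \u escapes (truncated).
sample = '{"text": "\\u672c\\u516c\\u53f8\\u4f1a\\u5728\\u60a8\\u767b\\u5f55\\u53ca\\u7248\\u672c\\u66f4\\u65b0\\u65f6...", "target": 1}'
record = json.loads(sample)
print(record["text"])    # prints the decoded Chinese text
print(record["target"])  # 1
```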
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 70 |
| valid | 19 |
| EAST/autotrain-data-Rule | [
"task_categories:text-classification",
"language:zh",
"region:us"
] | 2022-04-27T13:55:32+00:00 | {"language": ["zh"], "task_categories": ["text-classification"]} | 2022-10-25T09:12:41+00:00 | [] | [
"zh"
] | TAGS
#task_categories-text-classification #language-Chinese #region-us
| AutoTrain Dataset for project: Rule
===================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project Rule.
### Languages
The BCP-47 code for the dataset's language is zh.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is zh.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-Chinese #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is zh.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
904009d6f19d0d4eabb8f1471a92d49c839920d5 |
This dataset contains sentence-level formality annotations used in the 2016
TACL paper "An Empirical Analysis of Formality in Online Communication"
(Pavlick and Tetreault, 2016). It includes sentences from four genres (news,
blogs, email, and QA forums), all annotated by humans on Amazon Mechanical
Turk. The news and blog data was collected by Shibamouli Lahiri, and we are
redistributing it here for the convenience of other researchers. We collected
the email and answers data ourselves, using a similar annotation setup to
Shibamouli.
In the original dataset, `answers` and `email` were tokenized. In this version,
Oleksiy Syvokon detokenized them with `moses-detokenizer` and a bunch of
additional regexps.
If you use this data in your work, please cite BOTH of the below papers:
```
@article{PavlickAndTetreault-2016:TACL,
author = {Ellie Pavlick and Joel Tetreault},
title = {An Empirical Analysis of Formality in Online Communication},
journal = {Transactions of the Association for Computational Linguistics},
year = {2016},
publisher = {Association for Computational Linguistics}
}
@article{Lahiri-2015:arXiv,
title={{SQUINKY! A} Corpus of Sentence-level Formality, Informativeness, and Implicature},
author={Lahiri, Shibamouli},
journal={arXiv preprint arXiv:1506.02306},
year={2015}
}
```
## Contents
The annotated data files and number of lines in each are as follows:
* 4977 answers -- Annotated sentences from a random sample of posts from the Yahoo! Answers forums: https://answers.yahoo.com/
* 1821 blog -- Annotated sentences from the top 100 blogs listed on http://technorati.com/ on October 31, 2009.
* 1701 email -- Annotated sentences from a random sample of emails from the Jeb Bush email archive: http://americanbridgepac.org/jeb-bushs-gubernatorial-email-archive/
* 2775 news -- Annotated sentences from the "breaking", "recent", and "local" news sections of the following 20 news sites: CNN, CBS News, ABC News, Reuters, BBC News Online, New York Times, Los Angeles Times, The Guardian (U.K.), Voice of America, Boston Globe, Chicago Tribune, San Francisco Chronicle, Times Online (U.K.), news.com.au, Xinhua, The Times of India, Seattle Post Intelligencer, Daily Mail, and Bloomberg L.P.
## Format
Each record contains the following fields:
1. `avg_score`: the mean formality rating, which ranges from -3 to 3 where lower scores indicate less formal sentences
2. `sentence`
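A minimal usage sketch with the `datasets` library, assuming the Hub copy of this data (repo id `osyvokon/pavlick-formality-scores`) exposes the two fields above in a `train` split; check the actual schema for the genre and split layout:
```
from datasets import load_dataset

# Repo id and split name are assumptions; field names follow the list above.
ds = load_dataset("osyvokon/pavlick-formality-scores", split="train")
print(ds[0])  # e.g. {"avg_score": ..., "sentence": "..."}

# avg_score runs from -3 (informal) to 3 (formal); keep clearly formal sentences.
formal = ds.filter(lambda ex: ex["avg_score"] >= 1.0)
print(f"{len(formal)} of {len(ds)} sentences have avg_score >= 1.0")
```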
| osyvokon/pavlick-formality-scores | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-04-27T14:28:07+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring"], "pretty_name": "Sentence-level formality annotations for news, blogs, email and QA forums.\n\nPublished in \"An Empirical Analysis of Formality in Online Communication\" (Pavlick and Tetreault, 2016) "} | 2023-11-24T16:41:26+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-3.0 #region-us
|
This dataset contains sentence-level formality annotations used in the 2016
TACL paper "An Empirical Analysis of Formality in Online Communication"
(Pavlick and Tetreault, 2016). It includes sentences from four genres (news,
blogs, email, and QA forums), all annotated by humans on Amazon Mechanical
Turk. The news and blog data was collected by Shibamouli Lahiri, and we are
redistributing it here for the convenience of other researchers. We collected
the email and answers data ourselves, using a similar annotation setup to
Shibamouli.
In the original dataset, 'answers' and 'email' were tokenized. In this version,
Oleksiy Syvokon detokenized them with 'moses-detokenizer' and a bunch of
additional regexps.
If you use this data in your work, please cite BOTH of the below papers:
## Contents
The annotated data files and number of lines in each are as follows:
* 4977 answers -- Annotated sentences from a random sample of posts from the Yahoo! Answers forums: URL
* 1821 blog -- Annotated sentences from the top 100 blogs listed on URL on October 31, 2009.
* 1701 email -- Annotated sentences from a random sample of emails from the Jeb Bush email archive: URL
* 2775 news -- Annotated sentences from the "breaking", "recent", and "local" news sections of the following 20 news sites: CNN, CBS News, ABC News, Reuters, BBC News Online, New York Times, Los Angeles Times, The Guardian (U.K.), Voice of America, Boston Globe, Chicago Tribune, San Francisco Chronicle, Times Online (U.K.), URL, Xinhua, The Times of India, Seattle Post Intelligencer, Daily Mail, and Bloomberg L.P.
## Format
Each record contains the following fields:
1. 'avg_score': the mean formality rating, which ranges from -3 to 3 where lower scores indicate less formal sentences
2. 'sentence'
| [
"## Contents\n\nThe annotated data files and number of lines in each are as follows:\n\n* 4977 answers -- Annotated sentences from a random sample of posts from the Yahoo! Answers forums: URL\n* 1821 blog -- Annotated sentences from the top 100 blogs listed on URL on October 31, 2009.\n* 1701 email -- Annotated sentences from a random sample of emails from the Jeb Bush email archive: URL\n* 2775 news -- Annotated sentences from the \"breaking\", \"recent\", and \"local\" news sections of the following 20 news sites: CNN, CBS News, ABC News, Reuters, BBC News Online, New York Times, Los Angeles Times, The Guardian (U.K.), Voice of America, Boston Globe, Chicago Tribune, San Francisco Chronicle, Times Online (U.K.), URL, Xinhua, The Times of India, Seattle Post Intelligencer, Daily Mail, and Bloomberg L.P.",
"## Format\n\nEach record contains the following fields:\n\n1. 'avg_score': the mean formality rating, which ranges from -3 to 3 where lower scores indicate less formal sentences\n2. 'sentence'"
] | [
"TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-3.0 #region-us \n",
"## Contents\n\nThe annotated data files and number of lines in each are as follows:\n\n* 4977 answers -- Annotated sentences from a random sample of posts from the Yahoo! Answers forums: URL\n* 1821 blog -- Annotated sentences from the top 100 blogs listed on URL on October 31, 2009.\n* 1701 email -- Annotated sentences from a random sample of emails from the Jeb Bush email archive: URL\n* 2775 news -- Annotated sentences from the \"breaking\", \"recent\", and \"local\" news sections of the following 20 news sites: CNN, CBS News, ABC News, Reuters, BBC News Online, New York Times, Los Angeles Times, The Guardian (U.K.), Voice of America, Boston Globe, Chicago Tribune, San Francisco Chronicle, Times Online (U.K.), URL, Xinhua, The Times of India, Seattle Post Intelligencer, Daily Mail, and Bloomberg L.P.",
"## Format\n\nEach record contains the following fields:\n\n1. 'avg_score': the mean formality rating, which ranges from -3 to 3 where lower scores indicate less formal sentences\n2. 'sentence'"
] |
2d04a4fde26d92fbed2ae7604677736e6b901a4d | This file contains news texts (sentences) belonging to different writing styles. The original dataset created by {*Upeksha, D., Wijayarathna, C., Siriwardena, M.,
Lasandun, L., Wimalasuriya, C., de Silva, N., and Dias, G. (2015). Implementing a corpus for Sinhala language. 01*} has been processed and cleaned.
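A minimal loading sketch (the repo id below is taken from this page; the column names and label set are not documented here, so inspect the features after loading):
```
from datasets import load_dataset

# Repo id assumed from this page; column names and labels are undocumented.
ds = load_dataset("NLPC-UOM/Writing-style-classification")
print(ds)  # available splits and row counts
first_split = next(iter(ds.values()))
print(first_split.features)  # inspect column names and the label set
```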
If you use this dataset, please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} and the above mentioned paper. | NLPC-UOM/Writing-style-classification | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:si",
"license:mit",
"region:us"
] | 2022-04-27T17:08:07+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["si"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "sinhala-writing-style-classification"} | 2022-10-25T09:12:46+00:00 | [] | [
"si"
] | TAGS
#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #language-Sinhala #license-mit #region-us
| This file contains news texts (sentences) belonging to different writing styles. The original dataset created by {*Upeksha, D., Wijayarathna, C., Siriwardena, M.,
Lasandun, L., Wimalasuriya, C., de Silva, N., and Dias, G. (2015). Implementing a corpus for Sinhala language. 01*}is processed and cleaned.
If you use this dataset, please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} and the above mentioned paper. | [] | [
"TAGS\n#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #language-Sinhala #license-mit #region-us \n"
] |
e220d18727665580a083db6162dd78f9e0f61438 |
Original from TODO.
# Data Statement for SHADES
> **How to use this document:**
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: https://www.aclweb.org/anthology/Q18-1041/ .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "DATASTATEMENT.md". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Bias-Shades
Citation (if available): TODO.
Data set developer(s): TODO.
Data statement author(s): TODO.
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
This dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted.
<!-- We then adapted the crowdsourcing method described by [Nangia et al. (2020)](https://arxiv.org/pdf/2010.00133) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC [(Fiumara et al., 2020)](https://www.aclweb.org/anthology/2020.cllrd-1.1.pdf), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (https://languagearc.com/projects/19) to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way. -->
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: en-US, fr-FR, ....
* Language variety description: English spoken by native American speakers, native French people from metropolitan France.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
N/A
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants in the collection project were recruited through the HuggingFace BigScience project, specifically the Bias and Fairness Evaluation group.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
The collected data is a set of offensive stereotyped statements in French; they might be upsetting.
Alongside these stereotyped statements are paired anti-stereotyped statements.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
Examples were gathered using the LanguageArc site and by creating a dedicated project: https://languagearc.com/projects/19
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: [[email protected]](mailto:[email protected]). This document template is licensed as [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the [2020 LREC workshop on Data Statements](https://sites.google.com/uw.edu/data-statements-for-nlp/), by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski. | bigscience-catalogue-data/bias-shades | [
"license:cc-by-sa-4.0",
"arxiv:2010.00133",
"region:us"
] | 2022-04-27T17:49:56+00:00 | {"license": "cc-by-sa-4.0"} | 2022-05-01T14:18:02+00:00 | [
"2010.00133"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2010.00133 #region-us
|
Original from TODO.
# Data Statement for SHADES
> How to use this document:
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: URL .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "URL". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Bias-Shades
Citation (if available): TODO.
Data set developer(s): TODO.
Data statement author(s): TODO.
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
This dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: en-US, fr-FR, ....
* Language variety description: English spoken by native American speakers, native French people from metropolitan France.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
N/A
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
Collected data is a collection of offensive stereotyped statements in French, they might be upsetting.
Along these stereotyped statements are paired anti-stereotyped statements.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
Examples were gathered using the LanguageArc site and by creating a dedicated project: URL
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski. | [
"# Data Statement for SHADES\n\n> How to use this document:\n> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.\n\n> For full details, the best source is the original Data Statements paper, here: URL .\n\n> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as \"URL\". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.\n\n> Only blockquoted content should be deleted; the final about statement should be left intact.\n\nData set name: Bias-Shades\n\nCitation (if available): TODO.\n\nData set developer(s): TODO.\n\nData statement author(s): TODO.\n\nOthers who contributed to this document: N/A\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## A. CURATION RATIONALE \n\n> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.\n\nThis dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted.",
"## B. LANGUAGE VARIETY/VARIETIES\n\n> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., \"English as spoken in Palo Alto, California\", or \"Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin\").\n\n* BCP-47 language tags: en-US, fr-FR, ....\n* Language variety description: English spoken by native American speakers, native French people from metropolitan France.",
"## C. CONTRIBUTOR DEMOGRAPHIC\n> ## C. SPEAKER DEMOGRAPHIC\n\n> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: \n\nN/A",
"## D. ANNOTATOR DEMOGRAPHIC\n\n> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:\n\nParticipants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group.",
"## E. SPEECH SITUATION\n\nN/A",
"## F. TEXT CHARACTERISTICS\n\n> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.\n\nCollected data is a collection of offensive stereotyped statements in French, they might be upsetting.\n\nAlong these stereotyped statements are paired anti-stereotyped statements.",
"## G. RECORDING QUALITY\n\nN/A",
"## H. OTHER\n\n> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.",
"## I. PROVENANCE APPENDIX\n\nExamples were gathered using the LanguageArc site and by creating a dedicated project: URL",
"## About this document\n\nA data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.\n\nData Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.\n\nThis version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski."
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2010.00133 #region-us \n",
"# Data Statement for SHADES\n\n> How to use this document:\n> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.\n\n> For full details, the best source is the original Data Statements paper, here: URL .\n\n> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as \"URL\". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.\n\n> Only blockquoted content should be deleted; the final about statement should be left intact.\n\nData set name: Bias-Shades\n\nCitation (if available): TODO.\n\nData set developer(s): TODO.\n\nData statement author(s): TODO.\n\nOthers who contributed to this document: N/A\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## A. CURATION RATIONALE \n\n> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.\n\nThis dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted.",
"## B. LANGUAGE VARIETY/VARIETIES\n\n> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., \"English as spoken in Palo Alto, California\", or \"Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin\").\n\n* BCP-47 language tags: en-US, fr-FR, ....\n* Language variety description: English spoken by native American speakers, native French people from metropolitan France.",
"## C. CONTRIBUTOR DEMOGRAPHIC\n> ## C. SPEAKER DEMOGRAPHIC\n\n> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: \n\nN/A",
"## D. ANNOTATOR DEMOGRAPHIC\n\n> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:\n\nParticipants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group.",
"## E. SPEECH SITUATION\n\nN/A",
"## F. TEXT CHARACTERISTICS\n\n> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.\n\nCollected data is a collection of offensive stereotyped statements in French, they might be upsetting.\n\nAlong these stereotyped statements are paired anti-stereotyped statements.",
"## G. RECORDING QUALITY\n\nN/A",
"## H. OTHER\n\n> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.",
"## I. PROVENANCE APPENDIX\n\nExamples were gathered using the LanguageArc site and by creating a dedicated project: URL",
"## About this document\n\nA data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.\n\nData Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.\n\nThis version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski."
] |
f004a913f9170e5ff39e63f6172ee3ae918197c9 | mapping:
```
n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
n01491361 tiger shark, Galeocerdo cuvieri
n01494475 hammerhead, hammerhead shark
n01496331 electric ray, crampfish, numbfish, torpedo
n01498041 stingray
n01514668 cock
n01514859 hen
n01518878 ostrich, Struthio camelus
n01530575 brambling, Fringilla montifringilla
n01531178 goldfinch, Carduelis carduelis
n01532829 house finch, linnet, Carpodacus mexicanus
n01534433 junco, snowbird
n01537544 indigo bunting, indigo finch, indigo bird, Passerina cyanea
n01558993 robin, American robin, Turdus migratorius
n01560419 bulbul
n01580077 jay
n01582220 magpie
n01592084 chickadee
n01601694 water ouzel, dipper
n01608432 kite
n01614925 bald eagle, American eagle, Haliaeetus leucocephalus
n01616318 vulture
n01622779 great grey owl, great gray owl, Strix nebulosa
n01629819 European fire salamander, Salamandra salamandra
n01630670 common newt, Triturus vulgaris
n01631663 eft
n01632458 spotted salamander, Ambystoma maculatum
n01632777 axolotl, mud puppy, Ambystoma mexicanum
n01641577 bullfrog, Rana catesbeiana
n01644373 tree frog, tree-frog
n01644900 tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
n01664065 loggerhead, loggerhead turtle, Caretta caretta
n01665541 leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
n01667114 mud turtle
n01667778 terrapin
n01669191 box turtle, box tortoise
n01675722 banded gecko
n01677366 common iguana, iguana, Iguana iguana
n01682714 American chameleon, anole, Anolis carolinensis
n01685808 whiptail, whiptail lizard
n01687978 agama
n01688243 frilled lizard, Chlamydosaurus kingi
n01689811 alligator lizard
n01692333 Gila monster, Heloderma suspectum
n01693334 green lizard, Lacerta viridis
n01694178 African chameleon, Chamaeleo chamaeleon
n01695060 Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis
n01697457 African crocodile, Nile crocodile, Crocodylus niloticus
n01698640 American alligator, Alligator mississipiensis
n01704323 triceratops
n01728572 thunder snake, worm snake, Carphophis amoenus
n01728920 ringneck snake, ring-necked snake, ring snake
n01729322 hognose snake, puff adder, sand viper
n01729977 green snake, grass snake
n01734418 king snake, kingsnake
n01735189 garter snake, grass snake
n01737021 water snake
n01739381 vine snake
n01740131 night snake, Hypsiglena torquata
n01742172 boa constrictor, Constrictor constrictor
n01744401 rock python, rock snake, Python sebae
n01748264 Indian cobra, Naja naja
n01749939 green mamba
n01751748 sea snake
n01753488 horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
n01755581 diamondback, diamondback rattlesnake, Crotalus adamanteus
n01756291 sidewinder, horned rattlesnake, Crotalus cerastes
n01768244 trilobite
n01770081 harvestman, daddy longlegs, Phalangium opilio
n01770393 scorpion
n01773157 black and gold garden spider, Argiope aurantia
n01773549 barn spider, Araneus cavaticus
n01773797 garden spider, Aranea diademata
n01774384 black widow, Latrodectus mactans
n01774750 tarantula
n01775062 wolf spider, hunting spider
n01776313 tick
n01784675 centipede
n01795545 black grouse
n01796340 ptarmigan
n01797886 ruffed grouse, partridge, Bonasa umbellus
n01798484 prairie chicken, prairie grouse, prairie fowl
n01806143 peacock
n01806567 quail
n01807496 partridge
n01817953 African grey, African gray, Psittacus erithacus
n01818515 macaw
n01819313 sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
n01820546 lorikeet
n01824575 coucal
n01828970 bee eater
n01829413 hornbill
n01833805 hummingbird
n01843065 jacamar
n01843383 toucan
n01847000 drake
n01855032 red-breasted merganser, Mergus serrator
n01855672 goose
n01860187 black swan, Cygnus atratus
n01871265 tusker
n01872401 echidna, spiny anteater, anteater
n01873310 platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus
n01877812 wallaby, brush kangaroo
n01882714 koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
n01883070 wombat
n01910747 jellyfish
n01914609 sea anemone, anemone
n01917289 brain coral
n01924916 flatworm, platyhelminth
n01930112 nematode, nematode worm, roundworm
n01943899 conch
n01944390 snail
n01945685 slug
n01950731 sea slug, nudibranch
n01955084 chiton, coat-of-mail shell, sea cradle, polyplacophore
n01968897 chambered nautilus, pearly nautilus, nautilus
n01978287 Dungeness crab, Cancer magister
n01978455 rock crab, Cancer irroratus
n01980166 fiddler crab
n01981276 king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica
n01983481 American lobster, Northern lobster, Maine lobster, Homarus americanus
n01984695 spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
n01985128 crayfish, crawfish, crawdad, crawdaddy
n01986214 hermit crab
n01990800 isopod
n02002556 white stork, Ciconia ciconia
n02002724 black stork, Ciconia nigra
n02006656 spoonbill
n02007558 flamingo
n02009229 little blue heron, Egretta caerulea
n02009912 American egret, great white heron, Egretta albus
n02011460 bittern
n02012849 crane
n02013706 limpkin, Aramus pictus
n02017213 European gallinule, Porphyrio porphyrio
n02018207 American coot, marsh hen, mud hen, water hen, Fulica americana
n02018795 bustard
n02025239 ruddy turnstone, Arenaria interpres
n02027492 red-backed sandpiper, dunlin, Erolia alpina
n02028035 redshank, Tringa totanus
n02033041 dowitcher
n02037110 oystercatcher, oyster catcher
n02051845 pelican
n02056570 king penguin, Aptenodytes patagonica
n02058221 albatross, mollymawk
n02066245 grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus
n02071294 killer whale, killer, orca, grampus, sea wolf, Orcinus orca
n02074367 dugong, Dugong dugon
n02077923 sea lion
n02085620 Chihuahua
n02085782 Japanese spaniel
n02085936 Maltese dog, Maltese terrier, Maltese
n02086079 Pekinese, Pekingese, Peke
n02086240 Shih-Tzu
n02086646 Blenheim spaniel
n02086910 papillon
n02087046 toy terrier
n02087394 Rhodesian ridgeback
n02088094 Afghan hound, Afghan
n02088238 basset, basset hound
n02088364 beagle
n02088466 bloodhound, sleuthhound
n02088632 bluetick
n02089078 black-and-tan coonhound
n02089867 Walker hound, Walker foxhound
n02089973 English foxhound
n02090379 redbone
n02090622 borzoi, Russian wolfhound
n02090721 Irish wolfhound
n02091032 Italian greyhound
n02091134 whippet
n02091244 Ibizan hound, Ibizan Podenco
n02091467 Norwegian elkhound, elkhound
n02091635 otterhound, otter hound
n02091831 Saluki, gazelle hound
n02092002 Scottish deerhound, deerhound
n02092339 Weimaraner
n02093256 Staffordshire bullterrier, Staffordshire bull terrier
n02093428 American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier
n02093647 Bedlington terrier
n02093754 Border terrier
n02093859 Kerry blue terrier
n02093991 Irish terrier
n02094114 Norfolk terrier
n02094258 Norwich terrier
n02094433 Yorkshire terrier
n02095314 wire-haired fox terrier
n02095570 Lakeland terrier
n02095889 Sealyham terrier, Sealyham
n02096051 Airedale, Airedale terrier
n02096177 cairn, cairn terrier
n02096294 Australian terrier
n02096437 Dandie Dinmont, Dandie Dinmont terrier
n02096585 Boston bull, Boston terrier
n02097047 miniature schnauzer
n02097130 giant schnauzer
n02097209 standard schnauzer
n02097298 Scotch terrier, Scottish terrier, Scottie
n02097474 Tibetan terrier, chrysanthemum dog
n02097658 silky terrier, Sydney silky
n02098105 soft-coated wheaten terrier
n02098286 West Highland white terrier
n02098413 Lhasa, Lhasa apso
n02099267 flat-coated retriever
n02099429 curly-coated retriever
n02099601 golden retriever
n02099712 Labrador retriever
n02099849 Chesapeake Bay retriever
n02100236 German short-haired pointer
n02100583 vizsla, Hungarian pointer
n02100735 English setter
n02100877 Irish setter, red setter
n02101006 Gordon setter
n02101388 Brittany spaniel
n02101556 clumber, clumber spaniel
n02102040 English springer, English springer spaniel
n02102177 Welsh springer spaniel
n02102318 cocker spaniel, English cocker spaniel, cocker
n02102480 Sussex spaniel
n02102973 Irish water spaniel
n02104029 kuvasz
n02104365 schipperke
n02105056 groenendael
n02105162 malinois
n02105251 briard
n02105412 kelpie
n02105505 komondor
n02105641 Old English sheepdog, bobtail
n02105855 Shetland sheepdog, Shetland sheep dog, Shetland
n02106030 collie
n02106166 Border collie
n02106382 Bouvier des Flandres, Bouviers des Flandres
n02106550 Rottweiler
n02106662 German shepherd, German shepherd dog, German police dog, alsatian
n02107142 Doberman, Doberman pinscher
n02107312 miniature pinscher
n02107574 Greater Swiss Mountain dog
n02107683 Bernese mountain dog
n02107908 Appenzeller
n02108000 EntleBucher
n02108089 boxer
n02108422 bull mastiff
n02108551 Tibetan mastiff
n02108915 French bulldog
n02109047 Great Dane
n02109525 Saint Bernard, St Bernard
n02109961 Eskimo dog, husky
n02110063 malamute, malemute, Alaskan malamute
n02110185 Siberian husky
n02110341 dalmatian, coach dog, carriage dog
n02110627 affenpinscher, monkey pinscher, monkey dog
n02110806 basenji
n02110958 pug, pug-dog
n02111129 Leonberg
n02111277 Newfoundland, Newfoundland dog
n02111500 Great Pyrenees
n02111889 Samoyed, Samoyede
n02112018 Pomeranian
n02112137 chow, chow chow
n02112350 keeshond
n02112706 Brabancon griffon
n02113023 Pembroke, Pembroke Welsh corgi
n02113186 Cardigan, Cardigan Welsh corgi
n02113624 toy poodle
n02113712 miniature poodle
n02113799 standard poodle
n02113978 Mexican hairless
n02114367 timber wolf, grey wolf, gray wolf, Canis lupus
n02114548 white wolf, Arctic wolf, Canis lupus tundrarum
n02114712 red wolf, maned wolf, Canis rufus, Canis niger
n02114855 coyote, prairie wolf, brush wolf, Canis latrans
n02115641 dingo, warrigal, warragal, Canis dingo
n02115913 dhole, Cuon alpinus
n02116738 African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
n02117135 hyena, hyaena
n02119022 red fox, Vulpes vulpes
n02119789 kit fox, Vulpes macrotis
n02120079 Arctic fox, white fox, Alopex lagopus
n02120505 grey fox, gray fox, Urocyon cinereoargenteus
n02123045 tabby, tabby cat
n02123159 tiger cat
n02123394 Persian cat
n02123597 Siamese cat, Siamese
n02124075 Egyptian cat
n02125311 cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
n02127052 lynx, catamount
n02128385 leopard, Panthera pardus
n02128757 snow leopard, ounce, Panthera uncia
n02128925 jaguar, panther, Panthera onca, Felis onca
n02129165 lion, king of beasts, Panthera leo
n02129604 tiger, Panthera tigris
n02130308 cheetah, chetah, Acinonyx jubatus
n02132136 brown bear, bruin, Ursus arctos
n02133161 American black bear, black bear, Ursus americanus, Euarctos americanus
n02134084 ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
n02134418 sloth bear, Melursus ursinus, Ursus ursinus
n02137549 mongoose
n02138441 meerkat, mierkat
n02165105 tiger beetle
n02165456 ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
n02167151 ground beetle, carabid beetle
n02168699 long-horned beetle, longicorn, longicorn beetle
n02169497 leaf beetle, chrysomelid
n02172182 dung beetle
n02174001 rhinoceros beetle
n02177972 weevil
n02190166 fly
n02206856 bee
n02219486 ant, emmet, pismire
n02226429 grasshopper, hopper
n02229544 cricket
n02231487 walking stick, walkingstick, stick insect
n02233338 cockroach, roach
n02236044 mantis, mantid
n02256656 cicada, cicala
n02259212 leafhopper
n02264363 lacewing, lacewing fly
n02268443 dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk
n02268853 damselfly
n02276258 admiral
n02277742 ringlet, ringlet butterfly
n02279972 monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
n02280649 cabbage butterfly
n02281406 sulphur butterfly, sulfur butterfly
n02281787 lycaenid, lycaenid butterfly
n02317335 starfish, sea star
n02319095 sea urchin
n02321529 sea cucumber, holothurian
n02325366 wood rabbit, cottontail, cottontail rabbit
n02326432 hare
n02328150 Angora, Angora rabbit
n02342885 hamster
n02346627 porcupine, hedgehog
n02356798 fox squirrel, eastern fox squirrel, Sciurus niger
n02361337 marmot
n02363005 beaver
n02364673 guinea pig, Cavia cobaya
n02389026 sorrel
n02391049 zebra
n02395406 hog, pig, grunter, squealer, Sus scrofa
n02396427 wild boar, boar, Sus scrofa
n02397096 warthog
n02398521 hippopotamus, hippo, river horse, Hippopotamus amphibius
n02403003 ox
n02408429 water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
n02410509 bison
n02412080 ram, tup
n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
n02417914 ibex, Capra ibex
n02422106 hartebeest
n02422699 impala, Aepyceros melampus
n02423022 gazelle
n02437312 Arabian camel, dromedary, Camelus dromedarius
n02437616 llama
n02441942 weasel
n02442845 mink
n02443114 polecat, fitch, foulmart, foumart, Mustela putorius
n02443484 black-footed ferret, ferret, Mustela nigripes
n02444819 otter
n02445715 skunk, polecat, wood pussy
n02447366 badger
n02454379 armadillo
n02457408 three-toed sloth, ai, Bradypus tridactylus
n02480495 orangutan, orang, orangutang, Pongo pygmaeus
n02480855 gorilla, Gorilla gorilla
n02481823 chimpanzee, chimp, Pan troglodytes
n02483362 gibbon, Hylobates lar
n02483708 siamang, Hylobates syndactylus, Symphalangus syndactylus
n02484975 guenon, guenon monkey
n02486261 patas, hussar monkey, Erythrocebus patas
n02486410 baboon
n02487347 macaque
n02488291 langur
n02488702 colobus, colobus monkey
n02489166 proboscis monkey, Nasalis larvatus
n02490219 marmoset
n02492035 capuchin, ringtail, Cebus capucinus
n02492660 howler monkey, howler
n02493509 titi, titi monkey
n02493793 spider monkey, Ateles geoffroyi
n02494079 squirrel monkey, Saimiri sciureus
n02497673 Madagascar cat, ring-tailed lemur, Lemur catta
n02500267 indri, indris, Indri indri, Indri brevicaudatus
n02504013 Indian elephant, Elephas maximus
n02504458 African elephant, Loxodonta africana
n02509815 lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
n02510455 giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
n02514041 barracouta, snoek
n02526121 eel
n02536864 coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch
n02606052 rock beauty, Holocanthus tricolor
n02607072 anemone fish
n02640242 sturgeon
n02641379 gar, garfish, garpike, billfish, Lepisosteus osseus
n02643566 lionfish
n02655020 puffer, pufferfish, blowfish, globefish
n02666196 abacus
n02667093 abaya
n02669723 academic gown, academic robe, judge's robe
n02672831 accordion, piano accordion, squeeze box
n02676566 acoustic guitar
n02687172 aircraft carrier, carrier, flattop, attack aircraft carrier
n02690373 airliner
n02692877 airship, dirigible
n02699494 altar
n02701002 ambulance
n02704792 amphibian, amphibious vehicle
n02708093 analog clock
n02727426 apiary, bee house
n02730930 apron
n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin
n02749479 assault rifle, assault gun
n02769748 backpack, back pack, knapsack, packsack, rucksack, haversack
n02776631 bakery, bakeshop, bakehouse
n02777292 balance beam, beam
n02782093 balloon
n02783161 ballpoint, ballpoint pen, ballpen, Biro
n02786058 Band Aid
n02787622 banjo
n02788148 bannister, banister, balustrade, balusters, handrail
n02790996 barbell
n02791124 barber chair
n02791270 barbershop
n02793495 barn
n02794156 barometer
n02795169 barrel, cask
n02797295 barrow, garden cart, lawn cart, wheelbarrow
n02799071 baseball
n02802426 basketball
n02804414 bassinet
n02804610 bassoon
n02807133 bathing cap, swimming cap
n02808304 bath towel
n02808440 bathtub, bathing tub, bath, tub
n02814533 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
n02814860 beacon, lighthouse, beacon light, pharos
n02815834 beaker
n02817516 bearskin, busby, shako
n02823428 beer bottle
n02823750 beer glass
n02825657 bell cote, bell cot
n02834397 bib
n02835271 bicycle-built-for-two, tandem bicycle, tandem
n02837789 bikini, two-piece
n02840245 binder, ring-binder
n02841315 binoculars, field glasses, opera glasses
n02843684 birdhouse
n02859443 boathouse
n02860847 bobsled, bobsleigh, bob
n02865351 bolo tie, bolo, bola tie, bola
n02869837 bonnet, poke bonnet
n02870880 bookcase
n02871525 bookshop, bookstore, bookstall
n02877765 bottlecap
n02879718 bow
n02883205 bow tie, bow-tie, bowtie
n02892201 brass, memorial tablet, plaque
n02892767 brassiere, bra, bandeau
n02894605 breakwater, groin, groyne, mole, bulwark, seawall, jetty
n02895154 breastplate, aegis, egis
n02906734 broom
n02909870 bucket, pail
n02910353 buckle
n02916936 bulletproof vest
n02917067 bullet train, bullet
n02927161 butcher shop, meat market
n02930766 cab, hack, taxi, taxicab
n02939185 caldron, cauldron
n02948072 candle, taper, wax light
n02950826 cannon
n02951358 canoe
n02951585 can opener, tin opener
n02963159 cardigan
n02965783 car mirror
n02966193 carousel, carrousel, merry-go-round, roundabout, whirligig
n02966687 carpenter's kit, tool kit
n02971356 carton
n02974003 car wheel
n02977058 cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM
n02978881 cassette
n02979186 cassette player
n02980441 castle
n02981792 catamaran
n02988304 CD player
n02992211 cello, violoncello
n02992529 cellular telephone, cellular phone, cellphone, cell, mobile phone
n02999410 chain
n03000134 chainlink fence
n03000247 chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour
n03000684 chain saw, chainsaw
n03014705 chest
n03016953 chiffonier, commode
n03017168 chime, bell, gong
n03018349 china cabinet, china closet
n03026506 Christmas stocking
n03028079 church, church building
n03032252 cinema, movie theater, movie theatre, movie house, picture palace
n03041632 cleaver, meat cleaver, chopper
n03042490 cliff dwelling
n03045698 cloak
n03047690 clog, geta, patten, sabot
n03062245 cocktail shaker
n03063599 coffee mug
n03063689 coffeepot
n03065424 coil, spiral, volute, whorl, helix
n03075370 combination lock
n03085013 computer keyboard, keypad
n03089624 confectionery, confectionary, candy store
n03095699 container ship, containership, container vessel
n03100240 convertible
n03109150 corkscrew, bottle screw
n03110669 cornet, horn, trumpet, trump
n03124043 cowboy boot
n03124170 cowboy hat, ten-gallon hat
n03125729 cradle
n03126707 crane
n03127747 crash helmet
n03127925 crate
n03131574 crib, cot
n03133878 Crock Pot
n03134739 croquet ball
n03141823 crutch
n03146219 cuirass
n03160309 dam, dike, dyke
n03179701 desk
n03180011 desktop computer
n03187595 dial telephone, dial phone
n03188531 diaper, nappy, napkin
n03196217 digital clock
n03197337 digital watch
n03201208 dining table, board
n03207743 dishrag, dishcloth
n03207941 dishwasher, dish washer, dishwashing machine
n03208938 disk brake, disc brake
n03216828 dock, dockage, docking facility
n03218198 dogsled, dog sled, dog sleigh
n03220513 dome
n03223299 doormat, welcome mat
n03240683 drilling platform, offshore rig
n03249569 drum, membranophone, tympan
n03250847 drumstick
n03255030 dumbbell
n03259280 Dutch oven
n03271574 electric fan, blower
n03272010 electric guitar
n03272562 electric locomotive
n03290653 entertainment center
n03291819 envelope
n03297495 espresso maker
n03314780 face powder
n03325584 feather boa, boa
n03337140 file, file cabinet, filing cabinet
n03344393 fireboat
n03345487 fire engine, fire truck
n03347037 fire screen, fireguard
n03355925 flagpole, flagstaff
n03372029 flute, transverse flute
n03376595 folding chair
n03379051 football helmet
n03384352 forklift
n03388043 fountain
n03388183 fountain pen
n03388549 four-poster
n03393912 freight car
n03394916 French horn, horn
n03400231 frying pan, frypan, skillet
n03404251 fur coat
n03417042 garbage truck, dustcart
n03424325 gasmask, respirator, gas helmet
n03425413 gas pump, gasoline pump, petrol pump, island dispenser
n03443371 goblet
n03444034 go-kart
n03445777 golf ball
n03445924 golfcart, golf cart
n03447447 gondola
n03447721 gong, tam-tam
n03450230 gown
n03452741 grand piano, grand
n03457902 greenhouse, nursery, glasshouse
n03459775 grille, radiator grille
n03461385 grocery store, grocery, food market, market
n03467068 guillotine
n03476684 hair slide
n03476991 hair spray
n03478589 half track
n03481172 hammer
n03482405 hamper
n03483316 hand blower, blow dryer, blow drier, hair dryer, hair drier
n03485407 hand-held computer, hand-held microcomputer
n03485794 handkerchief, hankie, hanky, hankey
n03492542 hard disc, hard disk, fixed disk
n03494278 harmonica, mouth organ, harp, mouth harp
n03495258 harp
n03496892 harvester, reaper
n03498962 hatchet
n03527444 holster
n03529860 home theater, home theatre
n03530642 honeycomb
n03532672 hook, claw
n03534580 hoopskirt, crinoline
n03535780 horizontal bar, high bar
n03538406 horse cart, horse-cart
n03544143 hourglass
n03584254 iPod
n03584829 iron, smoothing iron
n03590841 jack-o'-lantern
n03594734 jean, blue jean, denim
n03594945 jeep, landrover
n03595614 jersey, T-shirt, tee shirt
n03598930 jigsaw puzzle
n03599486 jinrikisha, ricksha, rickshaw
n03602883 joystick
n03617480 kimono
n03623198 knee pad
n03627232 knot
n03630383 lab coat, laboratory coat
n03633091 ladle
n03637318 lampshade, lamp shade
n03642806 laptop, laptop computer
n03649909 lawn mower, mower
n03657121 lens cap, lens cover
n03658185 letter opener, paper knife, paperknife
n03661043 library
n03662601 lifeboat
n03666591 lighter, light, igniter, ignitor
n03670208 limousine, limo
n03673027 liner, ocean liner
n03676483 lipstick, lip rouge
n03680355 Loafer
n03690938 lotion
n03691459 loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
n03692522 loupe, jeweler's loupe
n03697007 lumbermill, sawmill
n03706229 magnetic compass
n03709823 mailbag, postbag
n03710193 mailbox, letter box
n03710637 maillot
n03710721 maillot, tank suit
n03717622 manhole cover
n03720891 maraca
n03721384 marimba, xylophone
n03724870 mask
n03729826 matchstick
n03733131 maypole
n03733281 maze, labyrinth
n03733805 measuring cup
n03742115 medicine chest, medicine cabinet
n03743016 megalith, megalithic structure
n03759954 microphone, mike
n03761084 microwave, microwave oven
n03763968 military uniform
n03764736 milk can
n03769881 minibus
n03770439 miniskirt, mini
n03770679 minivan
n03773504 missile
n03775071 mitten
n03775546 mixing bowl
n03776460 mobile home, manufactured home
n03777568 Model T
n03777754 modem
n03781244 monastery
n03782006 monitor
n03785016 moped
n03786901 mortar
n03787032 mortarboard
n03788195 mosque
n03788365 mosquito net
n03791053 motor scooter, scooter
n03792782 mountain bike, all-terrain bike, off-roader
n03792972 mountain tent
n03793489 mouse, computer mouse
n03794056 mousetrap
n03796401 moving van
n03803284 muzzle
n03804744 nail
n03814639 neck brace
n03814906 necklace
n03825788 nipple
n03832673 notebook, notebook computer
n03837869 obelisk
n03838899 oboe, hautboy, hautbois
n03840681 ocarina, sweet potato
n03841143 odometer, hodometer, mileometer, milometer
n03843555 oil filter
n03854065 organ, pipe organ
n03857828 oscilloscope, scope, cathode-ray oscilloscope, CRO
n03866082 overskirt
n03868242 oxcart
n03868863 oxygen mask
n03871628 packet
n03873416 paddle, boat paddle
n03874293 paddlewheel, paddle wheel
n03874599 padlock
n03876231 paintbrush
n03877472 pajama, pyjama, pj's, jammies
n03877845 palace
n03884397 panpipe, pandean pipe, syrinx
n03887697 paper towel
n03888257 parachute, chute
n03888605 parallel bars, bars
n03891251 park bench
n03891332 parking meter
n03895866 passenger car, coach, carriage
n03899768 patio, terrace
n03902125 pay-phone, pay-station
n03903868 pedestal, plinth, footstall
n03908618 pencil box, pencil case
n03908714 pencil sharpener
n03916031 perfume, essence
n03920288 Petri dish
n03924679 photocopier
n03929660 pick, plectrum, plectron
n03929855 pickelhaube
n03930313 picket fence, paling
n03930630 pickup, pickup truck
n03933933 pier
n03935335 piggy bank, penny bank
n03937543 pill bottle
n03938244 pillow
n03942813 ping-pong ball
n03944341 pinwheel
n03947888 pirate, pirate ship
n03950228 pitcher, ewer
n03954731 plane, carpenter's plane, woodworking plane
n03956157 planetarium
n03958227 plastic bag
n03961711 plate rack
n03967562 plow, plough
n03970156 plunger, plumber's helper
n03976467 Polaroid camera, Polaroid Land camera
n03976657 pole
n03977966 police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria
n03980874 poncho
n03982430 pool table, billiard table, snooker table
n03983396 pop bottle, soda bottle
n03991062 pot, flowerpot
n03992509 potter's wheel
n03995372 power drill
n03998194 prayer rug, prayer mat
n04004767 printer
n04005630 prison, prison house
n04008634 projectile, missile
n04009552 projector
n04019541 puck, hockey puck
n04023962 punching bag, punch bag, punching ball, punchball
n04026417 purse
n04033901 quill, quill pen
n04033995 quilt, comforter, comfort, puff
n04037443 racer, race car, racing car
n04039381 racket, racquet
n04040759 radiator
n04041544 radio, wireless
n04044716 radio telescope, radio reflector
n04049303 rain barrel
n04065272 recreational vehicle, RV, R.V.
n04067472 reel
n04069434 reflex camera
n04070727 refrigerator, icebox
n04074963 remote control, remote
n04081281 restaurant, eating house, eating place, eatery
n04086273 revolver, six-gun, six-shooter
n04090263 rifle
n04099969 rocking chair, rocker
n04111531 rotisserie
n04116512 rubber eraser, rubber, pencil eraser
n04118538 rugby ball
n04118776 rule, ruler
n04120489 running shoe
n04125021 safe
n04127249 safety pin
n04131690 saltshaker, salt shaker
n04133789 sandal
n04136333 sarong
n04141076 sax, saxophone
n04141327 scabbard
n04141975 scale, weighing machine
n04146614 school bus
n04147183 schooner
n04149813 scoreboard
n04152593 screen, CRT screen
n04153751 screw
n04154565 screwdriver
n04162706 seat belt, seatbelt
n04179913 sewing machine
n04192698 shield, buckler
n04200800 shoe shop, shoe-shop, shoe store
n04201297 shoji
n04204238 shopping basket
n04204347 shopping cart
n04208210 shovel
n04209133 shower cap
n04209239 shower curtain
n04228054 ski
n04229816 ski mask
n04235860 sleeping bag
n04238763 slide rule, slipstick
n04239074 sliding door
n04243546 slot, one-armed bandit
n04251144 snorkel
n04252077 snowmobile
n04252225 snowplow, snowplough
n04254120 soap dispenser
n04254680 soccer ball
n04254777 sock
n04258138 solar dish, solar collector, solar furnace
n04259630 sombrero
n04263257 soup bowl
n04264628 space bar
n04265275 space heater
n04266014 space shuttle
n04270147 spatula
n04273569 speedboat
n04275548 spider web, spider's web
n04277352 spindle
n04285008 sports car, sport car
n04286575 spotlight, spot
n04296562 stage
n04310018 steam locomotive
n04311004 steel arch bridge
n04311174 steel drum
n04317175 stethoscope
n04325704 stole
n04326547 stone wall
n04328186 stopwatch, stop watch
n04330267 stove
n04332243 strainer
n04335435 streetcar, tram, tramcar, trolley, trolley car
n04336792 stretcher
n04344873 studio couch, day bed
n04346328 stupa, tope
n04347754 submarine, pigboat, sub, U-boat
n04350905 suit, suit of clothes
n04355338 sundial
n04355933 sunglass
n04356056 sunglasses, dark glasses, shades
n04357314 sunscreen, sunblock, sun blocker
n04366367 suspension bridge
n04367480 swab, swob, mop
n04370456 sweatshirt
n04371430 swimming trunks, bathing trunks
n04371774 swing
n04372370 switch, electric switch, electrical switch
n04376876 syringe
n04380533 table lamp
n04389033 tank, army tank, armored combat vehicle, armoured combat vehicle
n04392985 tape player
n04398044 teapot
n04399382 teddy, teddy bear
n04404412 television, television system
n04409515 tennis ball
n04417672 thatch, thatched roof
n04418357 theater curtain, theatre curtain
n04423845 thimble
n04428191 thresher, thrasher, threshing machine
n04429376 throne
n04435653 tile roof
n04442312 toaster
n04443257 tobacco shop, tobacconist shop, tobacconist
n04447861 toilet seat
n04456115 torch
n04458633 totem pole
n04461696 tow truck, tow car, wrecker
n04462240 toyshop
n04465501 tractor
n04467665 trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi
n04476259 tray
n04479046 trench coat
n04482393 tricycle, trike, velocipede
n04483307 trimaran
n04485082 tripod
n04486054 triumphal arch
n04487081 trolleybus, trolley coach, trackless trolley
n04487394 trombone
n04493381 tub, vat
n04501370 turnstile
n04505470 typewriter keyboard
n04507155 umbrella
n04509417 unicycle, monocycle
n04515003 upright, upright piano
n04517823 vacuum, vacuum cleaner
n04522168 vase
n04523525 vault
n04525038 velvet
n04525305 vending machine
n04532106 vestment
n04532670 viaduct
n04536866 violin, fiddle
n04540053 volleyball
n04542943 waffle iron
n04548280 wall clock
n04548362 wallet, billfold, notecase, pocketbook
n04550184 wardrobe, closet, press
n04552348 warplane, military plane
n04553703 washbasin, handbasin, washbowl, lavabo, wash-hand basin
n04554684 washer, automatic washer, washing machine
n04557648 water bottle
n04560804 water jug
n04562935 water tower
n04579145 whiskey jug
n04579432 whistle
n04584207 wig
n04589890 window screen
n04590129 window shade
n04591157 Windsor tie
n04591713 wine bottle
n04592741 wing
n04596742 wok
n04597913 wooden spoon
n04599235 wool, woolen, woollen
n04604644 worm fence, snake fence, snake-rail fence, Virginia fence
n04606251 wreck
n04612504 yawl
n04613696 yurt
n06359193 web site, website, internet site, site
n06596364 comic book
n06785654 crossword puzzle, crossword
n06794110 street sign
n06874185 traffic light, traffic signal, stoplight
n07248320 book jacket, dust cover, dust jacket, dust wrapper
n07565083 menu
n07579787 plate
n07583066 guacamole
n07584110 consomme
n07590611 hot pot, hotpot
n07613480 trifle
n07614500 ice cream, icecream
n07615774 ice lolly, lolly, lollipop, popsicle
n07684084 French loaf
n07693725 bagel, beigel
n07695742 pretzel
n07697313 cheeseburger
n07697537 hotdog, hot dog, red hot
n07711569 mashed potato
n07714571 head cabbage
n07714990 broccoli
n07715103 cauliflower
n07716358 zucchini, courgette
n07716906 spaghetti squash
n07717410 acorn squash
n07717556 butternut squash
n07718472 cucumber, cuke
n07718747 artichoke, globe artichoke
n07720875 bell pepper
n07730033 cardoon
n07734744 mushroom
n07742313 Granny Smith
n07745940 strawberry
n07747607 orange
n07749582 lemon
n07753113 fig
n07753275 pineapple, ananas
n07753592 banana
n07754684 jackfruit, jak, jack
n07760859 custard apple
n07768694 pomegranate
n07802026 hay
n07831146 carbonara
n07836838 chocolate sauce, chocolate syrup
n07860988 dough
n07871810 meat loaf, meatloaf
n07873807 pizza, pizza pie
n07875152 potpie
n07880968 burrito
n07892512 red wine
n07920052 espresso
n07930864 cup
n07932039 eggnog
n09193705 alp
n09229709 bubble
n09246464 cliff, drop, drop-off
n09256479 coral reef
n09288635 geyser
n09332890 lakeside, lakeshore
n09399592 promontory, headland, head, foreland
n09421951 sandbar, sand bar
n09428293 seashore, coast, seacoast, sea-coast
n09468604 valley, vale
n09472597 volcano
n09835506 ballplayer, baseball player
n10148035 groom, bridegroom
n10565667 scuba diver
n11879895 rapeseed
n11939491 daisy
n12057211 yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum
n12144580 corn
n12267677 acorn
n12620546 hip, rose hip, rosehip
n12768682 buckeye, horse chestnut, conker
n12985857 coral fungus
n12998815 agaric
n13037406 gyromitra
n13040303 stinkhorn, carrion fungus
n13044778 earthstar
n13052670 hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa
n13054560 bolete
n13133613 ear, spike, capitulum
n15075141 toilet tissue, toilet paper, bathroom tissue
``` | mrm8488/ImageNet1K-val | [
"region:us"
] | 2022-04-27T18:05:28+00:00 | {} | 2022-04-27T18:16:51+00:00 | [] | [] | TAGS
#region-us
| mapping:
| [] | [
"TAGS\n#region-us \n"
] |
006c4bc87abe217e728bbd7bfcd76f9f828c11e2 | mapping:
```
n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
n01491361 tiger shark, Galeocerdo cuvieri
n01494475 hammerhead, hammerhead shark
n01496331 electric ray, crampfish, numbfish, torpedo
n01498041 stingray
n01514668 cock
n01514859 hen
n01518878 ostrich, Struthio camelus
n01530575 brambling, Fringilla montifringilla
n01531178 goldfinch, Carduelis carduelis
n01532829 house finch, linnet, Carpodacus mexicanus
n01534433 junco, snowbird
n01537544 indigo bunting, indigo finch, indigo bird, Passerina cyanea
n01558993 robin, American robin, Turdus migratorius
n01560419 bulbul
n01580077 jay
n01582220 magpie
n01592084 chickadee
n01601694 water ouzel, dipper
n01608432 kite
n01614925 bald eagle, American eagle, Haliaeetus leucocephalus
n01616318 vulture
n01622779 great grey owl, great gray owl, Strix nebulosa
n01629819 European fire salamander, Salamandra salamandra
n01630670 common newt, Triturus vulgaris
n01631663 eft
n01632458 spotted salamander, Ambystoma maculatum
n01632777 axolotl, mud puppy, Ambystoma mexicanum
n01641577 bullfrog, Rana catesbeiana
n01644373 tree frog, tree-frog
n01644900 tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
n01664065 loggerhead, loggerhead turtle, Caretta caretta
n01665541 leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
n01667114 mud turtle
n01667778 terrapin
n01669191 box turtle, box tortoise
n01675722 banded gecko
n01677366 common iguana, iguana, Iguana iguana
n01682714 American chameleon, anole, Anolis carolinensis
n01685808 whiptail, whiptail lizard
n01687978 agama
n01688243 frilled lizard, Chlamydosaurus kingi
n01689811 alligator lizard
n01692333 Gila monster, Heloderma suspectum
n01693334 green lizard, Lacerta viridis
n01694178 African chameleon, Chamaeleo chamaeleon
n01695060 Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis
n01697457 African crocodile, Nile crocodile, Crocodylus niloticus
n01698640 American alligator, Alligator mississipiensis
n01704323 triceratops
n01728572 thunder snake, worm snake, Carphophis amoenus
n01728920 ringneck snake, ring-necked snake, ring snake
n01729322 hognose snake, puff adder, sand viper
n01729977 green snake, grass snake
n01734418 king snake, kingsnake
n01735189 garter snake, grass snake
n01737021 water snake
n01739381 vine snake
n01740131 night snake, Hypsiglena torquata
n01742172 boa constrictor, Constrictor constrictor
n01744401 rock python, rock snake, Python sebae
n01748264 Indian cobra, Naja naja
n01749939 green mamba
n01751748 sea snake
n01753488 horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
n01755581 diamondback, diamondback rattlesnake, Crotalus adamanteus
n01756291 sidewinder, horned rattlesnake, Crotalus cerastes
n01768244 trilobite
n01770081 harvestman, daddy longlegs, Phalangium opilio
n01770393 scorpion
n01773157 black and gold garden spider, Argiope aurantia
n01773549 barn spider, Araneus cavaticus
n01773797 garden spider, Aranea diademata
n01774384 black widow, Latrodectus mactans
n01774750 tarantula
n01775062 wolf spider, hunting spider
n01776313 tick
n01784675 centipede
n01795545 black grouse
n01796340 ptarmigan
n01797886 ruffed grouse, partridge, Bonasa umbellus
n01798484 prairie chicken, prairie grouse, prairie fowl
n01806143 peacock
n01806567 quail
n01807496 partridge
n01817953 African grey, African gray, Psittacus erithacus
n01818515 macaw
n01819313 sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
n01820546 lorikeet
n01824575 coucal
n01828970 bee eater
n01829413 hornbill
n01833805 hummingbird
n01843065 jacamar
n01843383 toucan
n01847000 drake
n01855032 red-breasted merganser, Mergus serrator
n01855672 goose
n01860187 black swan, Cygnus atratus
n01871265 tusker
n01872401 echidna, spiny anteater, anteater
n01873310 platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus
n01877812 wallaby, brush kangaroo
n01882714 koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
n01883070 wombat
n01910747 jellyfish
n01914609 sea anemone, anemone
n01917289 brain coral
n01924916 flatworm, platyhelminth
n01930112 nematode, nematode worm, roundworm
n01943899 conch
n01944390 snail
n01945685 slug
n01950731 sea slug, nudibranch
n01955084 chiton, coat-of-mail shell, sea cradle, polyplacophore
n01968897 chambered nautilus, pearly nautilus, nautilus
n01978287 Dungeness crab, Cancer magister
n01978455 rock crab, Cancer irroratus
n01980166 fiddler crab
n01981276 king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica
n01983481 American lobster, Northern lobster, Maine lobster, Homarus americanus
n01984695 spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
n01985128 crayfish, crawfish, crawdad, crawdaddy
n01986214 hermit crab
n01990800 isopod
n02002556 white stork, Ciconia ciconia
n02002724 black stork, Ciconia nigra
n02006656 spoonbill
n02007558 flamingo
n02009229 little blue heron, Egretta caerulea
n02009912 American egret, great white heron, Egretta albus
n02011460 bittern
n02012849 crane
n02013706 limpkin, Aramus pictus
n02017213 European gallinule, Porphyrio porphyrio
n02018207 American coot, marsh hen, mud hen, water hen, Fulica americana
n02018795 bustard
n02025239 ruddy turnstone, Arenaria interpres
n02027492 red-backed sandpiper, dunlin, Erolia alpina
n02028035 redshank, Tringa totanus
n02033041 dowitcher
n02037110 oystercatcher, oyster catcher
n02051845 pelican
n02056570 king penguin, Aptenodytes patagonica
n02058221 albatross, mollymawk
n02066245 grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus
n02071294 killer whale, killer, orca, grampus, sea wolf, Orcinus orca
n02074367 dugong, Dugong dugon
n02077923 sea lion
n02085620 Chihuahua
n02085782 Japanese spaniel
n02085936 Maltese dog, Maltese terrier, Maltese
n02086079 Pekinese, Pekingese, Peke
n02086240 Shih-Tzu
n02086646 Blenheim spaniel
n02086910 papillon
n02087046 toy terrier
n02087394 Rhodesian ridgeback
n02088094 Afghan hound, Afghan
n02088238 basset, basset hound
n02088364 beagle
n02088466 bloodhound, sleuthhound
n02088632 bluetick
n02089078 black-and-tan coonhound
n02089867 Walker hound, Walker foxhound
n02089973 English foxhound
n02090379 redbone
n02090622 borzoi, Russian wolfhound
n02090721 Irish wolfhound
n02091032 Italian greyhound
n02091134 whippet
n02091244 Ibizan hound, Ibizan Podenco
n02091467 Norwegian elkhound, elkhound
n02091635 otterhound, otter hound
n02091831 Saluki, gazelle hound
n02092002 Scottish deerhound, deerhound
n02092339 Weimaraner
n02093256 Staffordshire bullterrier, Staffordshire bull terrier
n02093428 American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier
n02093647 Bedlington terrier
n02093754 Border terrier
n02093859 Kerry blue terrier
n02093991 Irish terrier
n02094114 Norfolk terrier
n02094258 Norwich terrier
n02094433 Yorkshire terrier
n02095314 wire-haired fox terrier
n02095570 Lakeland terrier
n02095889 Sealyham terrier, Sealyham
n02096051 Airedale, Airedale terrier
n02096177 cairn, cairn terrier
n02096294 Australian terrier
n02096437 Dandie Dinmont, Dandie Dinmont terrier
n02096585 Boston bull, Boston terrier
n02097047 miniature schnauzer
n02097130 giant schnauzer
n02097209 standard schnauzer
n02097298 Scotch terrier, Scottish terrier, Scottie
n02097474 Tibetan terrier, chrysanthemum dog
n02097658 silky terrier, Sydney silky
n02098105 soft-coated wheaten terrier
n02098286 West Highland white terrier
n02098413 Lhasa, Lhasa apso
n02099267 flat-coated retriever
n02099429 curly-coated retriever
n02099601 golden retriever
n02099712 Labrador retriever
n02099849 Chesapeake Bay retriever
n02100236 German short-haired pointer
n02100583 vizsla, Hungarian pointer
n02100735 English setter
n02100877 Irish setter, red setter
n02101006 Gordon setter
n02101388 Brittany spaniel
n02101556 clumber, clumber spaniel
n02102040 English springer, English springer spaniel
n02102177 Welsh springer spaniel
n02102318 cocker spaniel, English cocker spaniel, cocker
n02102480 Sussex spaniel
n02102973 Irish water spaniel
n02104029 kuvasz
n02104365 schipperke
n02105056 groenendael
n02105162 malinois
n02105251 briard
n02105412 kelpie
n02105505 komondor
n02105641 Old English sheepdog, bobtail
n02105855 Shetland sheepdog, Shetland sheep dog, Shetland
n02106030 collie
n02106166 Border collie
n02106382 Bouvier des Flandres, Bouviers des Flandres
n02106550 Rottweiler
n02106662 German shepherd, German shepherd dog, German police dog, alsatian
n02107142 Doberman, Doberman pinscher
n02107312 miniature pinscher
n02107574 Greater Swiss Mountain dog
n02107683 Bernese mountain dog
n02107908 Appenzeller
n02108000 EntleBucher
n02108089 boxer
n02108422 bull mastiff
n02108551 Tibetan mastiff
n02108915 French bulldog
n02109047 Great Dane
n02109525 Saint Bernard, St Bernard
n02109961 Eskimo dog, husky
n02110063 malamute, malemute, Alaskan malamute
n02110185 Siberian husky
n02110341 dalmatian, coach dog, carriage dog
n02110627 affenpinscher, monkey pinscher, monkey dog
n02110806 basenji
n02110958 pug, pug-dog
n02111129 Leonberg
n02111277 Newfoundland, Newfoundland dog
n02111500 Great Pyrenees
n02111889 Samoyed, Samoyede
n02112018 Pomeranian
n02112137 chow, chow chow
n02112350 keeshond
n02112706 Brabancon griffon
n02113023 Pembroke, Pembroke Welsh corgi
n02113186 Cardigan, Cardigan Welsh corgi
n02113624 toy poodle
n02113712 miniature poodle
n02113799 standard poodle
n02113978 Mexican hairless
n02114367 timber wolf, grey wolf, gray wolf, Canis lupus
n02114548 white wolf, Arctic wolf, Canis lupus tundrarum
n02114712 red wolf, maned wolf, Canis rufus, Canis niger
n02114855 coyote, prairie wolf, brush wolf, Canis latrans
n02115641 dingo, warrigal, warragal, Canis dingo
n02115913 dhole, Cuon alpinus
n02116738 African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
n02117135 hyena, hyaena
n02119022 red fox, Vulpes vulpes
n02119789 kit fox, Vulpes macrotis
n02120079 Arctic fox, white fox, Alopex lagopus
n02120505 grey fox, gray fox, Urocyon cinereoargenteus
n02123045 tabby, tabby cat
n02123159 tiger cat
n02123394 Persian cat
n02123597 Siamese cat, Siamese
n02124075 Egyptian cat
n02125311 cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
n02127052 lynx, catamount
n02128385 leopard, Panthera pardus
n02128757 snow leopard, ounce, Panthera uncia
n02128925 jaguar, panther, Panthera onca, Felis onca
n02129165 lion, king of beasts, Panthera leo
n02129604 tiger, Panthera tigris
n02130308 cheetah, chetah, Acinonyx jubatus
n02132136 brown bear, bruin, Ursus arctos
n02133161 American black bear, black bear, Ursus americanus, Euarctos americanus
n02134084 ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
n02134418 sloth bear, Melursus ursinus, Ursus ursinus
n02137549 mongoose
n02138441 meerkat, mierkat
n02165105 tiger beetle
n02165456 ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
n02167151 ground beetle, carabid beetle
n02168699 long-horned beetle, longicorn, longicorn beetle
n02169497 leaf beetle, chrysomelid
n02172182 dung beetle
n02174001 rhinoceros beetle
n02177972 weevil
n02190166 fly
n02206856 bee
n02219486 ant, emmet, pismire
n02226429 grasshopper, hopper
n02229544 cricket
n02231487 walking stick, walkingstick, stick insect
n02233338 cockroach, roach
n02236044 mantis, mantid
n02256656 cicada, cicala
n02259212 leafhopper
n02264363 lacewing, lacewing fly
n02268443 dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk
n02268853 damselfly
n02276258 admiral
n02277742 ringlet, ringlet butterfly
n02279972 monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
n02280649 cabbage butterfly
n02281406 sulphur butterfly, sulfur butterfly
n02281787 lycaenid, lycaenid butterfly
n02317335 starfish, sea star
n02319095 sea urchin
n02321529 sea cucumber, holothurian
n02325366 wood rabbit, cottontail, cottontail rabbit
n02326432 hare
n02328150 Angora, Angora rabbit
n02342885 hamster
n02346627 porcupine, hedgehog
n02356798 fox squirrel, eastern fox squirrel, Sciurus niger
n02361337 marmot
n02363005 beaver
n02364673 guinea pig, Cavia cobaya
n02389026 sorrel
n02391049 zebra
n02395406 hog, pig, grunter, squealer, Sus scrofa
n02396427 wild boar, boar, Sus scrofa
n02397096 warthog
n02398521 hippopotamus, hippo, river horse, Hippopotamus amphibius
n02403003 ox
n02408429 water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
n02410509 bison
n02412080 ram, tup
n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
n02417914 ibex, Capra ibex
n02422106 hartebeest
n02422699 impala, Aepyceros melampus
n02423022 gazelle
n02437312 Arabian camel, dromedary, Camelus dromedarius
n02437616 llama
n02441942 weasel
n02442845 mink
n02443114 polecat, fitch, foulmart, foumart, Mustela putorius
n02443484 black-footed ferret, ferret, Mustela nigripes
n02444819 otter
n02445715 skunk, polecat, wood pussy
n02447366 badger
n02454379 armadillo
n02457408 three-toed sloth, ai, Bradypus tridactylus
n02480495 orangutan, orang, orangutang, Pongo pygmaeus
n02480855 gorilla, Gorilla gorilla
n02481823 chimpanzee, chimp, Pan troglodytes
n02483362 gibbon, Hylobates lar
n02483708 siamang, Hylobates syndactylus, Symphalangus syndactylus
n02484975 guenon, guenon monkey
n02486261 patas, hussar monkey, Erythrocebus patas
n02486410 baboon
n02487347 macaque
n02488291 langur
n02488702 colobus, colobus monkey
n02489166 proboscis monkey, Nasalis larvatus
n02490219 marmoset
n02492035 capuchin, ringtail, Cebus capucinus
n02492660 howler monkey, howler
n02493509 titi, titi monkey
n02493793 spider monkey, Ateles geoffroyi
n02494079 squirrel monkey, Saimiri sciureus
n02497673 Madagascar cat, ring-tailed lemur, Lemur catta
n02500267 indri, indris, Indri indri, Indri brevicaudatus
n02504013 Indian elephant, Elephas maximus
n02504458 African elephant, Loxodonta africana
n02509815 lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
n02510455 giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
n02514041 barracouta, snoek
n02526121 eel
n02536864 coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch
n02606052 rock beauty, Holocanthus tricolor
n02607072 anemone fish
n02640242 sturgeon
n02641379 gar, garfish, garpike, billfish, Lepisosteus osseus
n02643566 lionfish
n02655020 puffer, pufferfish, blowfish, globefish
n02666196 abacus
n02667093 abaya
n02669723 academic gown, academic robe, judge's robe
n02672831 accordion, piano accordion, squeeze box
n02676566 acoustic guitar
n02687172 aircraft carrier, carrier, flattop, attack aircraft carrier
n02690373 airliner
n02692877 airship, dirigible
n02699494 altar
n02701002 ambulance
n02704792 amphibian, amphibious vehicle
n02708093 analog clock
n02727426 apiary, bee house
n02730930 apron
n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin
n02749479 assault rifle, assault gun
n02769748 backpack, back pack, knapsack, packsack, rucksack, haversack
n02776631 bakery, bakeshop, bakehouse
n02777292 balance beam, beam
n02782093 balloon
n02783161 ballpoint, ballpoint pen, ballpen, Biro
n02786058 Band Aid
n02787622 banjo
n02788148 bannister, banister, balustrade, balusters, handrail
n02790996 barbell
n02791124 barber chair
n02791270 barbershop
n02793495 barn
n02794156 barometer
n02795169 barrel, cask
n02797295 barrow, garden cart, lawn cart, wheelbarrow
n02799071 baseball
n02802426 basketball
n02804414 bassinet
n02804610 bassoon
n02807133 bathing cap, swimming cap
n02808304 bath towel
n02808440 bathtub, bathing tub, bath, tub
n02814533 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
n02814860 beacon, lighthouse, beacon light, pharos
n02815834 beaker
n02817516 bearskin, busby, shako
n02823428 beer bottle
n02823750 beer glass
n02825657 bell cote, bell cot
n02834397 bib
n02835271 bicycle-built-for-two, tandem bicycle, tandem
n02837789 bikini, two-piece
n02840245 binder, ring-binder
n02841315 binoculars, field glasses, opera glasses
n02843684 birdhouse
n02859443 boathouse
n02860847 bobsled, bobsleigh, bob
n02865351 bolo tie, bolo, bola tie, bola
n02869837 bonnet, poke bonnet
n02870880 bookcase
n02871525 bookshop, bookstore, bookstall
n02877765 bottlecap
n02879718 bow
n02883205 bow tie, bow-tie, bowtie
n02892201 brass, memorial tablet, plaque
n02892767 brassiere, bra, bandeau
n02894605 breakwater, groin, groyne, mole, bulwark, seawall, jetty
n02895154 breastplate, aegis, egis
n02906734 broom
n02909870 bucket, pail
n02910353 buckle
n02916936 bulletproof vest
n02917067 bullet train, bullet
n02927161 butcher shop, meat market
n02930766 cab, hack, taxi, taxicab
n02939185 caldron, cauldron
n02948072 candle, taper, wax light
n02950826 cannon
n02951358 canoe
n02951585 can opener, tin opener
n02963159 cardigan
n02965783 car mirror
n02966193 carousel, carrousel, merry-go-round, roundabout, whirligig
n02966687 carpenter's kit, tool kit
n02971356 carton
n02974003 car wheel
n02977058 cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM
n02978881 cassette
n02979186 cassette player
n02980441 castle
n02981792 catamaran
n02988304 CD player
n02992211 cello, violoncello
n02992529 cellular telephone, cellular phone, cellphone, cell, mobile phone
n02999410 chain
n03000134 chainlink fence
n03000247 chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour
n03000684 chain saw, chainsaw
n03014705 chest
n03016953 chiffonier, commode
n03017168 chime, bell, gong
n03018349 china cabinet, china closet
n03026506 Christmas stocking
n03028079 church, church building
n03032252 cinema, movie theater, movie theatre, movie house, picture palace
n03041632 cleaver, meat cleaver, chopper
n03042490 cliff dwelling
n03045698 cloak
n03047690 clog, geta, patten, sabot
n03062245 cocktail shaker
n03063599 coffee mug
n03063689 coffeepot
n03065424 coil, spiral, volute, whorl, helix
n03075370 combination lock
n03085013 computer keyboard, keypad
n03089624 confectionery, confectionary, candy store
n03095699 container ship, containership, container vessel
n03100240 convertible
n03109150 corkscrew, bottle screw
n03110669 cornet, horn, trumpet, trump
n03124043 cowboy boot
n03124170 cowboy hat, ten-gallon hat
n03125729 cradle
n03126707 crane
n03127747 crash helmet
n03127925 crate
n03131574 crib, cot
n03133878 Crock Pot
n03134739 croquet ball
n03141823 crutch
n03146219 cuirass
n03160309 dam, dike, dyke
n03179701 desk
n03180011 desktop computer
n03187595 dial telephone, dial phone
n03188531 diaper, nappy, napkin
n03196217 digital clock
n03197337 digital watch
n03201208 dining table, board
n03207743 dishrag, dishcloth
n03207941 dishwasher, dish washer, dishwashing machine
n03208938 disk brake, disc brake
n03216828 dock, dockage, docking facility
n03218198 dogsled, dog sled, dog sleigh
n03220513 dome
n03223299 doormat, welcome mat
n03240683 drilling platform, offshore rig
n03249569 drum, membranophone, tympan
n03250847 drumstick
n03255030 dumbbell
n03259280 Dutch oven
n03271574 electric fan, blower
n03272010 electric guitar
n03272562 electric locomotive
n03290653 entertainment center
n03291819 envelope
n03297495 espresso maker
n03314780 face powder
n03325584 feather boa, boa
n03337140 file, file cabinet, filing cabinet
n03344393 fireboat
n03345487 fire engine, fire truck
n03347037 fire screen, fireguard
n03355925 flagpole, flagstaff
n03372029 flute, transverse flute
n03376595 folding chair
n03379051 football helmet
n03384352 forklift
n03388043 fountain
n03388183 fountain pen
n03388549 four-poster
n03393912 freight car
n03394916 French horn, horn
n03400231 frying pan, frypan, skillet
n03404251 fur coat
n03417042 garbage truck, dustcart
n03424325 gasmask, respirator, gas helmet
n03425413 gas pump, gasoline pump, petrol pump, island dispenser
n03443371 goblet
n03444034 go-kart
n03445777 golf ball
n03445924 golfcart, golf cart
n03447447 gondola
n03447721 gong, tam-tam
n03450230 gown
n03452741 grand piano, grand
n03457902 greenhouse, nursery, glasshouse
n03459775 grille, radiator grille
n03461385 grocery store, grocery, food market, market
n03467068 guillotine
n03476684 hair slide
n03476991 hair spray
n03478589 half track
n03481172 hammer
n03482405 hamper
n03483316 hand blower, blow dryer, blow drier, hair dryer, hair drier
n03485407 hand-held computer, hand-held microcomputer
n03485794 handkerchief, hankie, hanky, hankey
n03492542 hard disc, hard disk, fixed disk
n03494278 harmonica, mouth organ, harp, mouth harp
n03495258 harp
n03496892 harvester, reaper
n03498962 hatchet
n03527444 holster
n03529860 home theater, home theatre
n03530642 honeycomb
n03532672 hook, claw
n03534580 hoopskirt, crinoline
n03535780 horizontal bar, high bar
n03538406 horse cart, horse-cart
n03544143 hourglass
n03584254 iPod
n03584829 iron, smoothing iron
n03590841 jack-o'-lantern
n03594734 jean, blue jean, denim
n03594945 jeep, landrover
n03595614 jersey, T-shirt, tee shirt
n03598930 jigsaw puzzle
n03599486 jinrikisha, ricksha, rickshaw
n03602883 joystick
n03617480 kimono
n03623198 knee pad
n03627232 knot
n03630383 lab coat, laboratory coat
n03633091 ladle
n03637318 lampshade, lamp shade
n03642806 laptop, laptop computer
n03649909 lawn mower, mower
n03657121 lens cap, lens cover
n03658185 letter opener, paper knife, paperknife
n03661043 library
n03662601 lifeboat
n03666591 lighter, light, igniter, ignitor
n03670208 limousine, limo
n03673027 liner, ocean liner
n03676483 lipstick, lip rouge
n03680355 Loafer
n03690938 lotion
n03691459 loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
n03692522 loupe, jeweler's loupe
n03697007 lumbermill, sawmill
n03706229 magnetic compass
n03709823 mailbag, postbag
n03710193 mailbox, letter box
n03710637 maillot
n03710721 maillot, tank suit
n03717622 manhole cover
n03720891 maraca
n03721384 marimba, xylophone
n03724870 mask
n03729826 matchstick
n03733131 maypole
n03733281 maze, labyrinth
n03733805 measuring cup
n03742115 medicine chest, medicine cabinet
n03743016 megalith, megalithic structure
n03759954 microphone, mike
n03761084 microwave, microwave oven
n03763968 military uniform
n03764736 milk can
n03769881 minibus
n03770439 miniskirt, mini
n03770679 minivan
n03773504 missile
n03775071 mitten
n03775546 mixing bowl
n03776460 mobile home, manufactured home
n03777568 Model T
n03777754 modem
n03781244 monastery
n03782006 monitor
n03785016 moped
n03786901 mortar
n03787032 mortarboard
n03788195 mosque
n03788365 mosquito net
n03791053 motor scooter, scooter
n03792782 mountain bike, all-terrain bike, off-roader
n03792972 mountain tent
n03793489 mouse, computer mouse
n03794056 mousetrap
n03796401 moving van
n03803284 muzzle
n03804744 nail
n03814639 neck brace
n03814906 necklace
n03825788 nipple
n03832673 notebook, notebook computer
n03837869 obelisk
n03838899 oboe, hautboy, hautbois
n03840681 ocarina, sweet potato
n03841143 odometer, hodometer, mileometer, milometer
n03843555 oil filter
n03854065 organ, pipe organ
n03857828 oscilloscope, scope, cathode-ray oscilloscope, CRO
n03866082 overskirt
n03868242 oxcart
n03868863 oxygen mask
n03871628 packet
n03873416 paddle, boat paddle
n03874293 paddlewheel, paddle wheel
n03874599 padlock
n03876231 paintbrush
n03877472 pajama, pyjama, pj's, jammies
n03877845 palace
n03884397 panpipe, pandean pipe, syrinx
n03887697 paper towel
n03888257 parachute, chute
n03888605 parallel bars, bars
n03891251 park bench
n03891332 parking meter
n03895866 passenger car, coach, carriage
n03899768 patio, terrace
n03902125 pay-phone, pay-station
n03903868 pedestal, plinth, footstall
n03908618 pencil box, pencil case
n03908714 pencil sharpener
n03916031 perfume, essence
n03920288 Petri dish
n03924679 photocopier
n03929660 pick, plectrum, plectron
n03929855 pickelhaube
n03930313 picket fence, paling
n03930630 pickup, pickup truck
n03933933 pier
n03935335 piggy bank, penny bank
n03937543 pill bottle
n03938244 pillow
n03942813 ping-pong ball
n03944341 pinwheel
n03947888 pirate, pirate ship
n03950228 pitcher, ewer
n03954731 plane, carpenter's plane, woodworking plane
n03956157 planetarium
n03958227 plastic bag
n03961711 plate rack
n03967562 plow, plough
n03970156 plunger, plumber's helper
n03976467 Polaroid camera, Polaroid Land camera
n03976657 pole
n03977966 police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria
n03980874 poncho
n03982430 pool table, billiard table, snooker table
n03983396 pop bottle, soda bottle
n03991062 pot, flowerpot
n03992509 potter's wheel
n03995372 power drill
n03998194 prayer rug, prayer mat
n04004767 printer
n04005630 prison, prison house
n04008634 projectile, missile
n04009552 projector
n04019541 puck, hockey puck
n04023962 punching bag, punch bag, punching ball, punchball
n04026417 purse
n04033901 quill, quill pen
n04033995 quilt, comforter, comfort, puff
n04037443 racer, race car, racing car
n04039381 racket, racquet
n04040759 radiator
n04041544 radio, wireless
n04044716 radio telescope, radio reflector
n04049303 rain barrel
n04065272 recreational vehicle, RV, R.V.
n04067472 reel
n04069434 reflex camera
n04070727 refrigerator, icebox
n04074963 remote control, remote
n04081281 restaurant, eating house, eating place, eatery
n04086273 revolver, six-gun, six-shooter
n04090263 rifle
n04099969 rocking chair, rocker
n04111531 rotisserie
n04116512 rubber eraser, rubber, pencil eraser
n04118538 rugby ball
n04118776 rule, ruler
n04120489 running shoe
n04125021 safe
n04127249 safety pin
n04131690 saltshaker, salt shaker
n04133789 sandal
n04136333 sarong
n04141076 sax, saxophone
n04141327 scabbard
n04141975 scale, weighing machine
n04146614 school bus
n04147183 schooner
n04149813 scoreboard
n04152593 screen, CRT screen
n04153751 screw
n04154565 screwdriver
n04162706 seat belt, seatbelt
n04179913 sewing machine
n04192698 shield, buckler
n04200800 shoe shop, shoe-shop, shoe store
n04201297 shoji
n04204238 shopping basket
n04204347 shopping cart
n04208210 shovel
n04209133 shower cap
n04209239 shower curtain
n04228054 ski
n04229816 ski mask
n04235860 sleeping bag
n04238763 slide rule, slipstick
n04239074 sliding door
n04243546 slot, one-armed bandit
n04251144 snorkel
n04252077 snowmobile
n04252225 snowplow, snowplough
n04254120 soap dispenser
n04254680 soccer ball
n04254777 sock
n04258138 solar dish, solar collector, solar furnace
n04259630 sombrero
n04263257 soup bowl
n04264628 space bar
n04265275 space heater
n04266014 space shuttle
n04270147 spatula
n04273569 speedboat
n04275548 spider web, spider's web
n04277352 spindle
n04285008 sports car, sport car
n04286575 spotlight, spot
n04296562 stage
n04310018 steam locomotive
n04311004 steel arch bridge
n04311174 steel drum
n04317175 stethoscope
n04325704 stole
n04326547 stone wall
n04328186 stopwatch, stop watch
n04330267 stove
n04332243 strainer
n04335435 streetcar, tram, tramcar, trolley, trolley car
n04336792 stretcher
n04344873 studio couch, day bed
n04346328 stupa, tope
n04347754 submarine, pigboat, sub, U-boat
n04350905 suit, suit of clothes
n04355338 sundial
n04355933 sunglass
n04356056 sunglasses, dark glasses, shades
n04357314 sunscreen, sunblock, sun blocker
n04366367 suspension bridge
n04367480 swab, swob, mop
n04370456 sweatshirt
n04371430 swimming trunks, bathing trunks
n04371774 swing
n04372370 switch, electric switch, electrical switch
n04376876 syringe
n04380533 table lamp
n04389033 tank, army tank, armored combat vehicle, armoured combat vehicle
n04392985 tape player
n04398044 teapot
n04399382 teddy, teddy bear
n04404412 television, television system
n04409515 tennis ball
n04417672 thatch, thatched roof
n04418357 theater curtain, theatre curtain
n04423845 thimble
n04428191 thresher, thrasher, threshing machine
n04429376 throne
n04435653 tile roof
n04442312 toaster
n04443257 tobacco shop, tobacconist shop, tobacconist
n04447861 toilet seat
n04456115 torch
n04458633 totem pole
n04461696 tow truck, tow car, wrecker
n04462240 toyshop
n04465501 tractor
n04467665 trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi
n04476259 tray
n04479046 trench coat
n04482393 tricycle, trike, velocipede
n04483307 trimaran
n04485082 tripod
n04486054 triumphal arch
n04487081 trolleybus, trolley coach, trackless trolley
n04487394 trombone
n04493381 tub, vat
n04501370 turnstile
n04505470 typewriter keyboard
n04507155 umbrella
n04509417 unicycle, monocycle
n04515003 upright, upright piano
n04517823 vacuum, vacuum cleaner
n04522168 vase
n04523525 vault
n04525038 velvet
n04525305 vending machine
n04532106 vestment
n04532670 viaduct
n04536866 violin, fiddle
n04540053 volleyball
n04542943 waffle iron
n04548280 wall clock
n04548362 wallet, billfold, notecase, pocketbook
n04550184 wardrobe, closet, press
n04552348 warplane, military plane
n04553703 washbasin, handbasin, washbowl, lavabo, wash-hand basin
n04554684 washer, automatic washer, washing machine
n04557648 water bottle
n04560804 water jug
n04562935 water tower
n04579145 whiskey jug
n04579432 whistle
n04584207 wig
n04589890 window screen
n04590129 window shade
n04591157 Windsor tie
n04591713 wine bottle
n04592741 wing
n04596742 wok
n04597913 wooden spoon
n04599235 wool, woolen, woollen
n04604644 worm fence, snake fence, snake-rail fence, Virginia fence
n04606251 wreck
n04612504 yawl
n04613696 yurt
n06359193 web site, website, internet site, site
n06596364 comic book
n06785654 crossword puzzle, crossword
n06794110 street sign
n06874185 traffic light, traffic signal, stoplight
n07248320 book jacket, dust cover, dust jacket, dust wrapper
n07565083 menu
n07579787 plate
n07583066 guacamole
n07584110 consomme
n07590611 hot pot, hotpot
n07613480 trifle
n07614500 ice cream, icecream
n07615774 ice lolly, lolly, lollipop, popsicle
n07684084 French loaf
n07693725 bagel, beigel
n07695742 pretzel
n07697313 cheeseburger
n07697537 hotdog, hot dog, red hot
n07711569 mashed potato
n07714571 head cabbage
n07714990 broccoli
n07715103 cauliflower
n07716358 zucchini, courgette
n07716906 spaghetti squash
n07717410 acorn squash
n07717556 butternut squash
n07718472 cucumber, cuke
n07718747 artichoke, globe artichoke
n07720875 bell pepper
n07730033 cardoon
n07734744 mushroom
n07742313 Granny Smith
n07745940 strawberry
n07747607 orange
n07749582 lemon
n07753113 fig
n07753275 pineapple, ananas
n07753592 banana
n07754684 jackfruit, jak, jack
n07760859 custard apple
n07768694 pomegranate
n07802026 hay
n07831146 carbonara
n07836838 chocolate sauce, chocolate syrup
n07860988 dough
n07871810 meat loaf, meatloaf
n07873807 pizza, pizza pie
n07875152 potpie
n07880968 burrito
n07892512 red wine
n07920052 espresso
n07930864 cup
n07932039 eggnog
n09193705 alp
n09229709 bubble
n09246464 cliff, drop, drop-off
n09256479 coral reef
n09288635 geyser
n09332890 lakeside, lakeshore
n09399592 promontory, headland, head, foreland
n09421951 sandbar, sand bar
n09428293 seashore, coast, seacoast, sea-coast
n09468604 valley, vale
n09472597 volcano
n09835506 ballplayer, baseball player
n10148035 groom, bridegroom
n10565667 scuba diver
n11879895 rapeseed
n11939491 daisy
n12057211 yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum
n12144580 corn
n12267677 acorn
n12620546 hip, rose hip, rosehip
n12768682 buckeye, horse chestnut, conker
n12985857 coral fungus
n12998815 agaric
n13037406 gyromitra
n13040303 stinkhorn, carrion fungus
n13044778 earthstar
n13052670 hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa
n13054560 bolete
n13133613 ear, spike, capitulum
n15075141 toilet tissue, toilet paper, bathroom tissue
``` | mrm8488/ImageNet1K-train | [
"region:us"
] | 2022-04-27T19:03:48+00:00 | {} | 2022-04-28T10:06:11+00:00 | [] | [] | TAGS
#region-us
| mapping:
| [] | [
"TAGS\n#region-us \n"
] |
ff6bd8e4b27c3543e4f8fe2108f32bb95a6f8740 |
# MASSIVE 1.1: A 1M-Example Multilingual Natural Language Understanding Dataset with 52 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE 1.1 | 52 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU):
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The MASSIVE 1.1 corpus consists of parallel sentences from 52 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Catalan - Spain (ca-ES)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("AmazonScience/massive", "en-US", split='train')
print(dataset[0])
```
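Every locale code listed above doubles as a configuration name, so any other language can be loaded the same way. The loop below is a small illustrative sketch (the three locales chosen are arbitrary); it relies only on the `utt` and `annot_utt` fields documented later in this card.
```python
from datasets import load_dataset

# Each MASSIVE 1.1 locale code is also a configuration name.
# The three locales below are an arbitrary illustrative choice.
for locale in ["fr-FR", "sw-KE", "zh-CN"]:
    ds = load_dataset("AmazonScience/massive", locale, split="train")
    example = ds[0]
    # `utt` is the raw utterance; `annot_utt` carries the inline slot annotations.
    print(locale, "|", example["utt"], "|", example["annot_utt"])
```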
## Dataset Structure
### Data Instances
```json
{
"id": "0",
"locale": "fr-FR",
"partition": "test",
"scenario": "alarm",
"intent": "alarm_set",
"utt": "réveille-moi à cinq heures du matin cette semaine",
"annot_utt": "réveille-moi à [time : cinq heures du matin] [date : cette semaine]",
"worker_id": "22",
"slot_method": [
{ "slot": "time", "method": "translation" },
{ "slot": "date", "method": "translation" }
],
"judgments": [
{
"worker_id": "22",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "8",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "0",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
}
]
}
```
### Data Fields
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. The SLURP en-US utterance with this ID served as the basis for this localization.
`locale`: is the language and country code according to ISO-639-1 and ISO-3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain of an utterance, aka the "scenario" in SLURP terminology
`intent`: is the specific intent of an utterance within a domain formatted as `{scenario}_{intent}`
`utt`: the raw utterance text without annotations
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 8 in the example above may appear as the localization worker ID for the localization of a different fr-FR utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
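Because `annot_utt` encodes slots inline as `[{label} : {entity}]`, the spans can be recovered with a small regular expression. The helpers below are only a sketch operating on a record shaped like the JSON instance above (values abridged from it); depending on how the data is loaded, `judgments` may instead be exposed in columnar form, in which case the second helper would need to be adapted.
```python
import re

# A record shaped like the JSON instance shown above (values abridged from it).
record = {
    "annot_utt": "réveille-moi à [time : cinq heures du matin] [date : cette semaine]",
    "judgments": [
        {"intent_score": 1, "slots_score": 1},
        {"intent_score": 1, "slots_score": 1},
        {"intent_score": 1, "slots_score": 1},
    ],
}

SLOT_PATTERN = re.compile(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]")

def extract_slots(annot_utt):
    """Return (slot_label, slot_value) pairs parsed from the inline annotation."""
    return SLOT_PATTERN.findall(annot_utt)

def unanimous_intent(judgments):
    """True when every judgment scored the intent as matching (intent_score == 1)."""
    return all(j["intent_score"] == 1 for j in judgments)

print(extract_slots(record["annot_utt"]))
# [('time', 'cinq heures du matin'), ('date', 'cette semaine')]
print(unanimous_intent(record["judgments"]))
# True
```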
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|ca-ES|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
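As a rough illustration of the `intent-classification` task on these splits, the sketch below fine-tunes a multilingual encoder on the en-US training partition. The encoder choice (`xlm-roberta-base`), the hyperparameters, and the assumption that `intent` is exposed as an integer class label in the Hugging Face release are illustrative, not prescribed by the dataset.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: `intent` is an integer ClassLabel (60 intents) in the HF release.
train_ds = load_dataset("AmazonScience/massive", "en-US", split="train")
num_intents = train_ds.features["intent"].num_classes

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=num_intents)

def preprocess(batch):
    # Tokenize the raw utterance and use the intent index as the training label.
    enc = tokenizer(batch["utt"], truncation=True)
    enc["labels"] = batch["intent"]
    return enc

encoded = train_ds.map(preprocess, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="massive-intent-en-us",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=encoded,
    tokenizer=tokenizer,
)
trainer.train()
```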
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face Upload and Integration__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following papers when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
| AmazonScience/massive | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:af-ZA",
"multilinguality:am-ET",
"multilinguality:ar-SA",
"multilinguality:az-AZ",
"multilinguality:bn-BD",
"multilinguality:ca-ES",
"multilinguality:cy-GB",
"multilinguality:da-DK",
"multilinguality:de-DE",
"multilinguality:el-GR",
"multilinguality:en-US",
"multilinguality:es-ES",
"multilinguality:fa-IR",
"multilinguality:fi-FI",
"multilinguality:fr-FR",
"multilinguality:he-IL",
"multilinguality:hi-IN",
"multilinguality:hu-HU",
"multilinguality:hy-AM",
"multilinguality:id-ID",
"multilinguality:is-IS",
"multilinguality:it-IT",
"multilinguality:ja-JP",
"multilinguality:jv-ID",
"multilinguality:ka-GE",
"multilinguality:km-KH",
"multilinguality:kn-IN",
"multilinguality:ko-KR",
"multilinguality:lv-LV",
"multilinguality:ml-IN",
"multilinguality:mn-MN",
"multilinguality:ms-MY",
"multilinguality:my-MM",
"multilinguality:nb-NO",
"multilinguality:nl-NL",
"multilinguality:pl-PL",
"multilinguality:pt-PT",
"multilinguality:ro-RO",
"multilinguality:ru-RU",
"multilinguality:sl-SL",
"multilinguality:sq-AL",
"multilinguality:sv-SE",
"multilinguality:sw-KE",
"multilinguality:ta-IN",
"multilinguality:te-IN",
"multilinguality:th-TH",
"multilinguality:tl-PH",
"multilinguality:tr-TR",
"multilinguality:ur-PK",
"multilinguality:vi-VN",
"multilinguality:zh-CN",
"multilinguality:zh-TW",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:cc-by-4.0",
"natural-language-understanding",
"arxiv:2204.08582",
"region:us"
] | 2022-04-27T19:48:46+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "license": ["cc-by-4.0"], "multilinguality": ["af-ZA", "am-ET", "ar-SA", "az-AZ", "bn-BD", "ca-ES", "cy-GB", "da-DK", "de-DE", "el-GR", "en-US", "es-ES", "fa-IR", "fi-FI", "fr-FR", "he-IL", "hi-IN", "hu-HU", "hy-AM", "id-ID", "is-IS", "it-IT", "ja-JP", "jv-ID", "ka-GE", "km-KH", "kn-IN", "ko-KR", "lv-LV", "ml-IN", "mn-MN", "ms-MY", "my-MM", "nb-NO", "nl-NL", "pl-PL", "pt-PT", "ro-RO", "ru-RU", "sl-SL", "sq-AL", "sv-SE", "sw-KE", "ta-IN", "te-IN", "th-TH", "tl-PH", "tr-TR", "ur-PK", "vi-VN", "zh-CN", "zh-TW"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-class-classification"], "paperswithcode_id": "massive", "pretty_name": "MASSIVE", "language_bcp47": ["af-ZA", "am-ET", "ar-SA", "az-AZ", "bn-BD", "ca-ES", "cy-GB", "da-DK", "de-DE", "el-GR", "en-US", "es-ES", "fa-IR", "fi-FI", "fr-FR", "he-IL", "hi-IN", "hu-HU", "hy-AM", "id-ID", "is-IS", "it-IT", "ja-JP", "jv-ID", "ka-GE", "km-KH", "kn-IN", "ko-KR", "lv-LV", "ml-IN", "mn-MN", "ms-MY", "my-MM", "nb-NO", "nl-NL", "pl-PL", "pt-PT", "ro-RO", "ru-RU", "sl-SL", "sq-AL", "sv-SE", "sw-KE", "ta-IN", "te-IN", "th-TH", "tl-PH", "tr-TR", "ur-PK", "vi-VN", "zh-CN", "zh-TW"], "tags": ["natural-language-understanding"]} | 2022-11-16T15:44:51+00:00 | [
"2204.08582"
] | [] | TAGS
#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-af-ZA #multilinguality-am-ET #multilinguality-ar-SA #multilinguality-az-AZ #multilinguality-bn-BD #multilinguality-ca-ES #multilinguality-cy-GB #multilinguality-da-DK #multilinguality-de-DE #multilinguality-el-GR #multilinguality-en-US #multilinguality-es-ES #multilinguality-fa-IR #multilinguality-fi-FI #multilinguality-fr-FR #multilinguality-he-IL #multilinguality-hi-IN #multilinguality-hu-HU #multilinguality-hy-AM #multilinguality-id-ID #multilinguality-is-IS #multilinguality-it-IT #multilinguality-ja-JP #multilinguality-jv-ID #multilinguality-ka-GE #multilinguality-km-KH #multilinguality-kn-IN #multilinguality-ko-KR #multilinguality-lv-LV #multilinguality-ml-IN #multilinguality-mn-MN #multilinguality-ms-MY #multilinguality-my-MM #multilinguality-nb-NO #multilinguality-nl-NL #multilinguality-pl-PL #multilinguality-pt-PT #multilinguality-ro-RO #multilinguality-ru-RU #multilinguality-sl-SL #multilinguality-sq-AL #multilinguality-sv-SE #multilinguality-sw-KE #multilinguality-ta-IN #multilinguality-te-IN #multilinguality-th-TH #multilinguality-tl-PH #multilinguality-tr-TR #multilinguality-ur-PK #multilinguality-vi-VN #multilinguality-zh-CN #multilinguality-zh-TW #size_categories-100K<n<1M #source_datasets-original #license-cc-by-4.0 #natural-language-understanding #arxiv-2204.08582 #region-us
| MASSIVE 1.1: A 1M-Example Multilingual Natural Language Understanding Dataset with 52 Typologically-Diverse Languages
=====================================================================================================================
Table of Contents
-----------------
* [Dataset Card for](#dataset-card-for-needs-more-information)
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- No Warranty
- Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard: URL
* Point of Contact: GitHub
### Dataset Summary
MASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for 'natural-language-understanding' (NLU) :
* 'intent-classification'
* 'multi-class-classification'
* 'natural-language-understanding'
### Languages
The MASSIVE 1.1 corpora consists of parallel sentences from 52 languages :
* 'Afrikaans - South Africa (af-ZA)'
* 'Amharic - Ethiopia (am-ET)'
* 'Arabic - Saudi Arabia (ar-SA)'
* 'Azeri - Azerbaijan (az-AZ)'
* 'Bengali - Bangladesh (bn-BD)'
* 'Catalan - Spain (ca-ES)'
* 'Chinese - China (zh-CN)'
* 'Chinese - Taiwan (zh-TW)'
* 'Danish - Denmark (da-DK)'
* 'German - Germany (de-DE)'
* 'Greek - Greece (el-GR)'
* 'English - United States (en-US)'
* 'Spanish - Spain (es-ES)'
* 'Farsi - Iran (fa-IR)'
* 'Finnish - Finland (fi-FI)'
* 'French - France (fr-FR)'
* 'Hebrew - Israel (he-IL)'
* 'Hungarian - Hungary (hu-HU)'
* 'Armenian - Armenia (hy-AM)'
* 'Indonesian - Indonesia (id-ID)'
* 'Icelandic - Iceland (is-IS)'
* 'Italian - Italy (it-IT)'
* 'Japanese - Japan (ja-JP)'
* 'Javanese - Indonesia (jv-ID)'
* 'Georgian - Georgia (ka-GE)'
* 'Khmer - Cambodia (km-KH)'
* 'Korean - Korea (ko-KR)'
* 'Latvian - Latvia (lv-LV)'
* 'Mongolian - Mongolia (mn-MN)'
* 'Malay - Malaysia (ms-MY)'
* 'Burmese - Myanmar (my-MM)'
* 'Norwegian - Norway (nb-NO)'
* 'Dutch - Netherlands (nl-NL)'
* 'Polish - Poland (pl-PL)'
* 'Portuguese - Portugal (pt-PT)'
* 'Romanian - Romania (ro-RO)'
* 'Russian - Russia (ru-RU)'
* 'Slovenian - Slovenia (sl-SL)'
* 'Albanian - Albania (sq-AL)'
* 'Swedish - Sweden (sv-SE)'
* 'Swahili - Kenya (sw-KE)'
* 'Hindi - India (hi-IN)'
* 'Kannada - India (kn-IN)'
* 'Malayalam - India (ml-IN)'
* 'Tamil - India (ta-IN)'
* 'Telugu - India (te-IN)'
* 'Thai - Thailand (th-TH)'
* 'Tagalog - Philippines (tl-PH)'
* 'Turkish - Turkey (tr-TR)'
* 'Urdu - Pakistan (ur-PK)'
* 'Vietnamese - Vietnam (vi-VN)'
* 'Welsh - United Kingdom (cy-GB)'
Load the dataset with HuggingFace
---------------------------------
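
A minimal loading sketch, assuming the Hub ID `AmazonScience/massive` and the locale code (e.g. `en-US`) as the configuration name:

```python
from datasets import load_dataset

# Load one locale of MASSIVE; the locale code (e.g. "en-US") is assumed to be
# the configuration name, so check the Hub page if this raises a config error.
massive_en = load_dataset("AmazonScience/massive", "en-US")

sample = massive_en["train"][0]
print(sample["utt"])        # raw utterance text
print(sample["intent"])     # intent label
print(sample["annot_utt"])  # utterance with inline slot annotations
```
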
Dataset Structure
-----------------
### Data Instances
### Data Fields
'id': maps to the original ID in the SLURP collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.
'locale': is the language and country code according to ISO-639-1 and ISO-3166.
'partition': is either 'train', 'dev', or 'test', according to the original split in SLURP.
'scenario': is the general domain, aka "scenario" in SLURP terminology, of an utterance
'intent': is the specific intent of an utterance within a domain formatted as '{scenario}\_{intent}'
'utt': the raw utterance text without annotations
'annot\_utt': the text from 'utt' with slot annotations formatted as '[{label} : {entity}]'
'worker\_id': The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
'slot\_method': for each slot in the utterance, whether that slot was a 'translation' (i.e., same expression just in the target language), 'localization' (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or 'unchanged' (i.e., the original en-US slot value was copied over without modification).
'judgments': Each judgment collected for the localized utterance has 6 keys. 'worker\_id' is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.
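
Since `annot_utt` marks slots inline as `[label : entity]`, a small helper like the sketch below can recover (slot, value) pairs; the utterance passed in the example call is made up for illustration:

```python
import re

# Extract (slot, value) pairs from an `annot_utt` string, where slots are
# written as "[label : entity]" per the field description above.
SLOT_RE = re.compile(r"\[\s*([^\[\]:]+?)\s*:\s*([^\[\]]+?)\s*\]")

def extract_slots(annot_utt: str):
    return SLOT_RE.findall(annot_utt)

print(extract_slots("wake me up at [time : nine am] on [date : friday]"))
# -> [('time', 'nine am'), ('date', 'friday')]
```
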
### Data Splits
### Personal and Sensitive Information
The corpora is free of personal or sensitive information.
Additional Information
----------------------
### Dataset Curators
**MASSIVE**: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
**SLURP**: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
**Hugging Face Upload and Integration**: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
Please cite the following papers when using this dataset.
| [
"### Dataset Summary\n\n\nMASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for 'natural-language-understanding' (NLU) :\n\n\n* 'intent-classification'\n* 'multi-class-classification'\n* 'natural-language-understanding'",
"### Languages\n\n\nThe MASSIVE 1.1 corpora consists of parallel sentences from 52 languages :\n\n\n* 'Afrikaans - South Africa (af-ZA)'\n* 'Amharic - Ethiopia (am-ET)'\n* 'Arabic - Saudi Arabia (ar-SA)'\n* 'Azeri - Azerbaijan (az-AZ)'\n* 'Bengali - Bangladesh (bn-BD)'\n* 'Catalan - Spain (ca-ES)'\n* 'Chinese - China (zh-CN)'\n* 'Chinese - Taiwan (zh-TW)'\n* 'Danish - Denmark (da-DK)'\n* 'German - Germany (de-DE)'\n* 'Greek - Greece (el-GR)'\n* 'English - United States (en-US)'\n* 'Spanish - Spain (es-ES)'\n* 'Farsi - Iran (fa-IR)'\n* 'Finnish - Finland (fi-FI)'\n* 'French - France (fr-FR)'\n* 'Hebrew - Israel (he-IL)'\n* 'Hungarian - Hungary (hu-HU)'\n* 'Armenian - Armenia (hy-AM)'\n* 'Indonesian - Indonesia (id-ID)'\n* 'Icelandic - Iceland (is-IS)'\n* 'Italian - Italy (it-IT)'\n* 'Japanese - Japan (ja-JP)'\n* 'Javanese - Indonesia (jv-ID)'\n* 'Georgian - Georgia (ka-GE)'\n* 'Khmer - Cambodia (km-KH)'\n* 'Korean - Korea (ko-KR)'\n* 'Latvian - Latvia (lv-LV)'\n* 'Mongolian - Mongolia (mn-MN)'\n* 'Malay - Malaysia (ms-MY)'\n* 'Burmese - Myanmar (my-MM)'\n* 'Norwegian - Norway (nb-NO)'\n* 'Dutch - Netherlands (nl-NL)'\n* 'Polish - Poland (pl-PL)'\n* 'Portuguese - Portugal (pt-PT)'\n* 'Romanian - Romania (ro-RO)'\n* 'Russian - Russia (ru-RU)'\n* 'Slovanian - Slovania (sl-SL)'\n* 'Albanian - Albania (sq-AL)'\n* 'Swedish - Sweden (sv-SE)'\n* 'Swahili - Kenya (sw-KE)'\n* 'Hindi - India (hi-IN)'\n* 'Kannada - India (kn-IN)'\n* 'Malayalam - India (ml-IN)'\n* 'Tamil - India (ta-IN)'\n* 'Telugu - India (te-IN)'\n* 'Thai - Thailand (th-TH)'\n* 'Tagalog - Philippines (tl-PH)'\n* 'Turkish - Turkey (tr-TR)'\n* 'Urdu - Pakistan (ur-PK)'\n* 'Vietnamese - Vietnam (vi-VN)'\n* 'Welsh - United Kingdom (cy-GB)'\n\n\nLoad the dataset with HuggingFace\n---------------------------------\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n'id': maps to the original ID in the SLURP collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.\n\n\n'locale': is the language and country code accoring to ISO-639-1 and ISO-3166.\n\n\n'partition': is either 'train', 'dev', or 'test', according to the original split in SLURP.\n\n\n'scenario': is the general domain, aka \"scenario\" in SLURP terminology, of an utterance\n\n\n'intent': is the specific intent of an utterance within a domain formatted as '{scenario}\\_{intent}'\n\n\n'utt': the raw utterance text without annotations\n\n\n'annot\\_utt': the text from 'utt' with slot annotations formatted as '[{label} : {entity}]'\n\n\n'worker\\_id': The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.\n\n\n'slot\\_method': for each slot in the utterance, whether that slot was a 'translation' (i.e., same expression just in the target language), 'localization' (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or 'unchanged' (i.e., the original en-US slot value was copied over without modification).\n\n\n'judgments': Each judgment collected for the localized utterance has 6 keys. 'worker\\_id' is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.",
"### Data Splits",
"### Personal and Sensitive Information\n\n\nThe corpora is free of personal or sensitive information.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n**MASSIVE**: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.\n\n\n**SLURP**: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.\n\n\n**Hugging Face Upload and Integration**: Labrak Yanis (Not affiliated with the original corpus)",
"### Licensing Information\n\n\nPlease cite the following papers when using this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-af-ZA #multilinguality-am-ET #multilinguality-ar-SA #multilinguality-az-AZ #multilinguality-bn-BD #multilinguality-ca-ES #multilinguality-cy-GB #multilinguality-da-DK #multilinguality-de-DE #multilinguality-el-GR #multilinguality-en-US #multilinguality-es-ES #multilinguality-fa-IR #multilinguality-fi-FI #multilinguality-fr-FR #multilinguality-he-IL #multilinguality-hi-IN #multilinguality-hu-HU #multilinguality-hy-AM #multilinguality-id-ID #multilinguality-is-IS #multilinguality-it-IT #multilinguality-ja-JP #multilinguality-jv-ID #multilinguality-ka-GE #multilinguality-km-KH #multilinguality-kn-IN #multilinguality-ko-KR #multilinguality-lv-LV #multilinguality-ml-IN #multilinguality-mn-MN #multilinguality-ms-MY #multilinguality-my-MM #multilinguality-nb-NO #multilinguality-nl-NL #multilinguality-pl-PL #multilinguality-pt-PT #multilinguality-ro-RO #multilinguality-ru-RU #multilinguality-sl-SL #multilinguality-sq-AL #multilinguality-sv-SE #multilinguality-sw-KE #multilinguality-ta-IN #multilinguality-te-IN #multilinguality-th-TH #multilinguality-tl-PH #multilinguality-tr-TR #multilinguality-ur-PK #multilinguality-vi-VN #multilinguality-zh-CN #multilinguality-zh-TW #size_categories-100K<n<1M #source_datasets-original #license-cc-by-4.0 #natural-language-understanding #arxiv-2204.08582 #region-us \n",
"### Dataset Summary\n\n\nMASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for 'natural-language-understanding' (NLU) :\n\n\n* 'intent-classification'\n* 'multi-class-classification'\n* 'natural-language-understanding'",
"### Languages\n\n\nThe MASSIVE 1.1 corpora consists of parallel sentences from 52 languages :\n\n\n* 'Afrikaans - South Africa (af-ZA)'\n* 'Amharic - Ethiopia (am-ET)'\n* 'Arabic - Saudi Arabia (ar-SA)'\n* 'Azeri - Azerbaijan (az-AZ)'\n* 'Bengali - Bangladesh (bn-BD)'\n* 'Catalan - Spain (ca-ES)'\n* 'Chinese - China (zh-CN)'\n* 'Chinese - Taiwan (zh-TW)'\n* 'Danish - Denmark (da-DK)'\n* 'German - Germany (de-DE)'\n* 'Greek - Greece (el-GR)'\n* 'English - United States (en-US)'\n* 'Spanish - Spain (es-ES)'\n* 'Farsi - Iran (fa-IR)'\n* 'Finnish - Finland (fi-FI)'\n* 'French - France (fr-FR)'\n* 'Hebrew - Israel (he-IL)'\n* 'Hungarian - Hungary (hu-HU)'\n* 'Armenian - Armenia (hy-AM)'\n* 'Indonesian - Indonesia (id-ID)'\n* 'Icelandic - Iceland (is-IS)'\n* 'Italian - Italy (it-IT)'\n* 'Japanese - Japan (ja-JP)'\n* 'Javanese - Indonesia (jv-ID)'\n* 'Georgian - Georgia (ka-GE)'\n* 'Khmer - Cambodia (km-KH)'\n* 'Korean - Korea (ko-KR)'\n* 'Latvian - Latvia (lv-LV)'\n* 'Mongolian - Mongolia (mn-MN)'\n* 'Malay - Malaysia (ms-MY)'\n* 'Burmese - Myanmar (my-MM)'\n* 'Norwegian - Norway (nb-NO)'\n* 'Dutch - Netherlands (nl-NL)'\n* 'Polish - Poland (pl-PL)'\n* 'Portuguese - Portugal (pt-PT)'\n* 'Romanian - Romania (ro-RO)'\n* 'Russian - Russia (ru-RU)'\n* 'Slovanian - Slovania (sl-SL)'\n* 'Albanian - Albania (sq-AL)'\n* 'Swedish - Sweden (sv-SE)'\n* 'Swahili - Kenya (sw-KE)'\n* 'Hindi - India (hi-IN)'\n* 'Kannada - India (kn-IN)'\n* 'Malayalam - India (ml-IN)'\n* 'Tamil - India (ta-IN)'\n* 'Telugu - India (te-IN)'\n* 'Thai - Thailand (th-TH)'\n* 'Tagalog - Philippines (tl-PH)'\n* 'Turkish - Turkey (tr-TR)'\n* 'Urdu - Pakistan (ur-PK)'\n* 'Vietnamese - Vietnam (vi-VN)'\n* 'Welsh - United Kingdom (cy-GB)'\n\n\nLoad the dataset with HuggingFace\n---------------------------------\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n'id': maps to the original ID in the SLURP collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.\n\n\n'locale': is the language and country code accoring to ISO-639-1 and ISO-3166.\n\n\n'partition': is either 'train', 'dev', or 'test', according to the original split in SLURP.\n\n\n'scenario': is the general domain, aka \"scenario\" in SLURP terminology, of an utterance\n\n\n'intent': is the specific intent of an utterance within a domain formatted as '{scenario}\\_{intent}'\n\n\n'utt': the raw utterance text without annotations\n\n\n'annot\\_utt': the text from 'utt' with slot annotations formatted as '[{label} : {entity}]'\n\n\n'worker\\_id': The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.\n\n\n'slot\\_method': for each slot in the utterance, whether that slot was a 'translation' (i.e., same expression just in the target language), 'localization' (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or 'unchanged' (i.e., the original en-US slot value was copied over without modification).\n\n\n'judgments': Each judgment collected for the localized utterance has 6 keys. 'worker\\_id' is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.",
"### Data Splits",
"### Personal and Sensitive Information\n\n\nThe corpora is free of personal or sensitive information.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n**MASSIVE**: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.\n\n\n**SLURP**: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.\n\n\n**Hugging Face Upload and Integration**: Labrak Yanis (Not affiliated with the original corpus)",
"### Licensing Information\n\n\nPlease cite the following papers when using this dataset."
] |
3a14d4d277b7ad97c668740aa9d0affadbf3fe8d | # CodeParrot 🦜 Dataset Cleaned and filtered (validation)
## Dataset Description
A dataset of Python files from Github. It is a more filtered version of the validation split [codeparrot-clean-valid](https://huggingface.co/datasets/codeparrot/codeparrot-clean-valid) of [codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean#codeparrot-%F0%9F%A6%9C-dataset-cleaned). The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:
- files with a mention of "test file" or "configuration file" or similar in the first 5 lines
- files with high occurrence of the keywords "test " or "config"
- files without a mention of the keywords `def`, `for`, `while` and `class`
- files that use the assignment operator ```=``` less than 5 times
- files with ratio between number of characters and number of tokens after tokenization < 1.5 | codeparrot/codeparrot-valid-more-filtering | [
"region:us"
] | 2022-04-27T19:57:16+00:00 | {} | 2022-06-21T16:56:02+00:00 | [] | [] | TAGS
#region-us
| # CodeParrot Dataset Cleaned and filtered (validation)
## Dataset Description
A dataset of Python files from Github. It is a more filtered version of the validation split codeparrot-clean-valid of codeparrot-clean. The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:
- files with a mention of "test file" or "configuration file" or similar in the first 5 lines
- files with high occurrence of the keywords "test " or "config"
- files without a mention of the keywords 'def', 'for', 'while' and 'class'
- files that use the assignment operator less than 5 times
- files with ratio between number of characters and number of tokens after tokenization < 1.5 | [
"# CodeParrot Dataset Cleaned and filtered (validation)",
"## Dataset Description\n\nA dataset of Python files from Github. It is a more filtered version of the validation split codeparrot-clean-valid of codeparrot-clean. The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:\n\n- files with a mention of \"test file\" or \"configuration file\" or similar in the first 5 lines\n- files with high occurence of the keywords \"test \" or \"config\" \n- files without a mention of the keywords 'def', 'for', 'while' and 'class'\n- files that use the assignment operator less than 5 times \n- files with ratio between number of characters and number of tokens after tokenization < 1.5"
] | [
"TAGS\n#region-us \n",
"# CodeParrot Dataset Cleaned and filtered (validation)",
"## Dataset Description\n\nA dataset of Python files from Github. It is a more filtered version of the validation split codeparrot-clean-valid of codeparrot-clean. The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:\n\n- files with a mention of \"test file\" or \"configuration file\" or similar in the first 5 lines\n- files with high occurence of the keywords \"test \" or \"config\" \n- files without a mention of the keywords 'def', 'for', 'while' and 'class'\n- files that use the assignment operator less than 5 times \n- files with ratio between number of characters and number of tokens after tokenization < 1.5"
] |
f6e726786732dbdb82a970091bb6d7ffe2251c80 | # CodeParrot 🦜 Dataset Cleaned and filtered (train)
## Dataset Description
A dataset of Python files from Github. It is a more filtered version of the train split [codeparrot-clean-train](https://huggingface.co/datasets/codeparrot/codeparrot-clean-train) of [codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean#codeparrot-%F0%9F%A6%9C-dataset-cleaned). The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:
- files with a mention of "test file" or "configuration file" or similar in the first 5 lines
- files with high occurrence of the keywords "test " or "config"
- files without a mention of the keywords `def`, `for`, `while` and `class`
- files that use the assignment operator ```=``` less than 5 times
- files with ratio between number of characters and number of tokens after tokenization < 1.5
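
A rough sketch of how these heuristics can be expressed is given below. It is illustrative only: the keyword-frequency threshold, the `content` argument, and the choice of tokenizer are assumptions rather than the exact preprocessing script.

```python
import random

# Illustrative filter following the rules listed above; `content` is the raw
# file text and `tokenize` is whichever tokenizer the ratio was computed with.
def keep_file(content: str, tokenize, prob: float = 0.7) -> bool:
    lines = content.splitlines()
    lower = content.lower()

    # Filters 1-3 are applied with probability 0.7.
    if random.random() < prob:
        first_lines = " ".join(lines[:5]).lower()
        if "test file" in first_lines or "configuration file" in first_lines:
            return False
        # The "high occurrence" cut-off is a guess, not a documented value.
        if lower.count("test ") > 20 or lower.count("config") > 20:
            return False
        if not any(kw in content for kw in ("def ", "for ", "while ", "class ")):
            return False

    # Filter 4: assignment operator used fewer than 5 times.
    if content.count("=") < 5:
        return False

    # Filter 5: ratio of characters to tokens after tokenization below 1.5.
    n_tokens = len(tokenize(content))
    if n_tokens and len(content) / n_tokens < 1.5:
        return False

    return True
```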
| codeparrot/codeparrot-train-more-filtering | [
"region:us"
] | 2022-04-27T21:28:47+00:00 | {} | 2022-06-21T16:54:51+00:00 | [] | [] | TAGS
#region-us
| # CodeParrot Dataset Cleaned and filtered (train)
## Dataset Description
A dataset of Python files from Github. It is a more filtered version of the train split codeparrot-clean-train of codeparrot-clean. The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:
- files with a mention of "test file" or "configuration file" or similar in the first 5 lines
- files with high occurrence of the keywords "test " or "config"
- files without a mention of the keywords 'def', 'for', 'while' and 'class'
- files that use the assignment operator less than 5 times
- files with ratio between number of characters and number of tokens after tokenization < 1.5
| [
"# CodeParrot Dataset Cleaned and filtered (train)",
"## Dataset Description\n\nA dataset of Python files from Github. It is a more filtered version of the train split codeparrot-clean-train of codeparrot-clean. The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:\n\n- files with a mention of \"test file\" or \"configuration file\" or similar in the first 5 lines\n- files with high occurence of the keywords \"test \" or \"config\" \n- files without a mention of the keywords 'def', 'for', 'while' and 'class'\n- files that use the assignment operator less than 5 times \n- files with ratio between number of characters and number of tokens after tokenization < 1.5"
] | [
"TAGS\n#region-us \n",
"# CodeParrot Dataset Cleaned and filtered (train)",
"## Dataset Description\n\nA dataset of Python files from Github. It is a more filtered version of the train split codeparrot-clean-train of codeparrot-clean. The additional filters aim at detecting configuration and test files, as well as outlier files that are unlikely to help the model learn code. The first three filters are applied with a probability of 0.7:\n\n- files with a mention of \"test file\" or \"configuration file\" or similar in the first 5 lines\n- files with high occurence of the keywords \"test \" or \"config\" \n- files without a mention of the keywords 'def', 'for', 'while' and 'class'\n- files that use the assignment operator less than 5 times \n- files with ratio between number of characters and number of tokens after tokenization < 1.5"
] |
c8e9269cd96a600bb340ee83ae45f46a02787a12 | # Dataset Card for Spanish IMDb Synopsis
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
## Dataset Description
4969 movie synopses from IMDb in Spanish.
### Dataset Summary
[N/A]
### Languages
All descriptions are in Spanish; the other fields have some mix of Spanish and English.
## Dataset Structure
[N/A]
### Data Fields
- `description`: IMDb description for the movie (string), should be in Spanish
- `keywords`: IMDb keywords for the movie (string), mix of Spanish and English
- `genre`: The genres of the movie (string), mix of Spanish and English
- `year`: The year the movie was released (float)
- `name`: The name of the movie (string), mix of Spanish and English
- `director`: The name of the main director of the movie, can be empty (string)
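
For illustration, a minimal sketch of loading the dataset and reading these fields (the single `train` split is an assumption):

```python
from datasets import load_dataset

# Minimal sketch: load the dataset from the Hub and inspect the fields above.
# A single "train" split is assumed; check the Hub viewer if this fails.
ds = load_dataset("mathigatti/spanish_imdb_synopsis", split="train")

example = ds[0]
print(example["name"], example["year"], example["director"])
print(example["genre"], "|", example["keywords"])
print(example["description"][:200])  # Spanish synopsis text
```
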
## Dataset Creation
[This Kaggle dataset](https://www.kaggle.com/datasets/komalkhetlani/imdb-dataset) was used as a starting point. IMDb was then scraped to download the synopses of movies with more than 5000 votes/reviews; movies without a synopsis available in Spanish were discarded. | mathigatti/spanish_imdb_synopsis | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"language:es",
"license:apache-2.0",
"region:us"
] | 2022-04-27T23:54:42+00:00 | {"annotations_creators": ["no-annotation"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "task_categories": ["summarization", "text-generation", "text2text-generation"]} | 2022-10-25T09:12:53+00:00 | [] | [
"es"
] | TAGS
#task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #annotations_creators-no-annotation #multilinguality-monolingual #language-Spanish #license-apache-2.0 #region-us
| # Dataset Card for Spanish IMDb Synopsis
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Fields
- Dataset Creation
## Dataset Description
4969 movie synopses from IMDb in Spanish.
### Dataset Summary
[N/A]
### Languages
All descriptions are in Spanish; the other fields have some mix of Spanish and English.
## Dataset Structure
[N/A]
### Data Fields
- 'description': IMDb description for the movie (string), should be spanish
- 'keywords': IMDb keywords for the movie (string), mix of spanish and english
- 'genre': The genres of the movie (string), mix of spanish and english
- 'year': The year the movie was published (float)
- 'name': The name of the movie (string), mix of spanish and english
- 'director': The name of the main director in the movie, can be empty (string)
## Dataset Creation
This kaggle dataset was used as a starting point. Then IMDb was scraped downloading the synopsis of the movies that have more than 5000 votes/reviews and those that did not have a synopsis available in Spanish were discarded. | [
"# Dataset Card for Spanish IMDb Synopsis",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n- Dataset Creation",
"## Dataset Description\n\n4969 movie synopsis from IMDb in spanish.",
"### Dataset Summary\n[N/A]",
"### Languages\n\nAll descriptions are in spanish, the other fields have some mix of spanish and english.",
"## Dataset Structure\n[N/A]",
"### Data Fields\n- 'description': IMDb description for the movie (string), should be spanish\n- 'keywords': IMDb keywords for the movie (string), mix of spanish and english\n- 'genre': The genres of the movie (string), mix of spanish and english\n- 'year': The year the movie was published (float)\n- 'name': The name of the movie (string), mix of spanish and english\n- 'director': The name of the main director in the movie, can be empty (string)",
"## Dataset Creation\n\nThis kaggle dataset was used as a starting point. Then IMDb was scraped downloading the synopsis of the movies that have more than 5000 votes/reviews and those that did not have a synopsis available in Spanish were discarded."
] | [
"TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #annotations_creators-no-annotation #multilinguality-monolingual #language-Spanish #license-apache-2.0 #region-us \n",
"# Dataset Card for Spanish IMDb Synopsis",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n- Dataset Creation",
"## Dataset Description\n\n4969 movie synopsis from IMDb in spanish.",
"### Dataset Summary\n[N/A]",
"### Languages\n\nAll descriptions are in spanish, the other fields have some mix of spanish and english.",
"## Dataset Structure\n[N/A]",
"### Data Fields\n- 'description': IMDb description for the movie (string), should be spanish\n- 'keywords': IMDb keywords for the movie (string), mix of spanish and english\n- 'genre': The genres of the movie (string), mix of spanish and english\n- 'year': The year the movie was published (float)\n- 'name': The name of the movie (string), mix of spanish and english\n- 'director': The name of the main director in the movie, can be empty (string)",
"## Dataset Creation\n\nThis kaggle dataset was used as a starting point. Then IMDb was scraped downloading the synopsis of the movies that have more than 5000 votes/reviews and those that did not have a synopsis available in Spanish were discarded."
] |
8ef331f75ebced42eceb7c5f53c47a3dafd8ef20 | hello
| TalTechNLP/VoxLingua107 | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-04-28T06:34:03+00:00 | {"license": "cc-by-nc-4.0"} | 2022-05-05T09:52:37+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
| hello
| [] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n"
] |
5d01e3f6a661d48e127ab5d7e3aaa0dc8331438a |
# Dataset Card for DanFEVER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/danfever](https://github.com/StrombergNLP/danfever)
- **Repository:** [https://stromberg.ai/publication/danfever/](https://stromberg.ai/publication/danfever/)
- **Paper:** [https://aclanthology.org/2021.nodalida-main.47/](https://aclanthology.org/2021.nodalida-main.47/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](mailto:[email protected])
- **Size of downloaded dataset files:** 2.82 MiB
- **Size of the generated dataset:** 2.80 MiB
- **Total amount of disk used:** 5.62 MiB
### Dataset Summary
We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.
### Supported Tasks and Leaderboards
This dataset supports the FEVER task, but in Danish.
* PwC leaderboard: [Fact Verification on DanFEVER](https://paperswithcode.com/sota/fact-verification-on-danfever)
### Languages
This dataset is in Danish; the bcp47 is `da_DK`.
## Dataset Structure
### Data Instances
```
{
'id': '0',
'claim': 'Den 31. oktober 1920 opdagede Walter Baade kometen (944) Hidalgo i det ydre solsystem.',
'label': 0,
'evidence_extract': '(944) Hidalgo (oprindeligt midlertidigt navn: 1920 HZ) er en mørk småplanet med en diameter på ca. 50 km, der befinder sig i det ydre solsystem. Objektet blev opdaget den 31. oktober 1920 af Walter Baade. En asteroide (småplanet, planetoide) er et fast himmellegeme, hvis bane går rundt om Solen (eller en anden stjerne). Pr. 5. maj 2017 kendes mere end 729.626 asteroider og de fleste befinder sig i asteroidebæltet mellem Mars og Jupiter.',
'verifiable': 1,
'evidence': 'wiki_26366, wiki_12289',
'original_id': '1'
}
```
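
A minimal loading sketch, assuming the Hub ID `strombergnlp/danfever` and a `train` split:

```python
from datasets import load_dataset

# Minimal loading sketch; adjust the split name if the data ships differently.
danfever = load_dataset("strombergnlp/danfever", split="train")

sample = danfever[0]
print(sample["claim"])
print(sample["label"], sample["verifiable"])
print(sample["evidence_extract"][:200])
```
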
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
A dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
The source language is from Wikipedia contributors and editors and from dictionary contributors and editors.
### Annotations
#### Annotation process
Detailed in [this paper](http://www.derczynski.com/papers/danfever.pdf).
#### Who are the annotators?
The annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.
### Discussion of Biases
The data is drawn from relatively formal topics, and so may perform poorly outside these areas.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.
### Citation Information
Refer to this work as:
> Nørregaard and Derczynski (2021). "DanFEVER: claim verification dataset for Danish", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).
Bibliographic reference:
```
@inproceedings{norregaard-derczynski-2021-danfever,
title = "{D}an{FEVER}: claim verification dataset for {D}anish",
author = "N{\o}rregaard, Jeppe and Derczynski, Leon",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
year = "2021",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.47",
pages = "422--428"
}
```
| strombergnlp/danfever | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"knowledge-verification",
"region:us"
] | 2022-04-28T08:17:29+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["da"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking", "natural-language-inference"], "paperswithcode_id": "danfever", "pretty_name": "DanFEVER", "tags": ["knowledge-verification"]} | 2022-10-25T20:42:40+00:00 | [] | [
"da"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-cc-by-4.0 #knowledge-verification #region-us
|
# Dataset Card for DanFEVER
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: Leon Derczynski
- Size of downloaded dataset files: 2.82 MiB
- Size of the generated dataset: 2.80 MiB
- Total amount of disk used: 5.62 MiB
### Dataset Summary
We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.
### Supported Tasks and Leaderboards
This dataset supports the FEVER task, but in Danish.
* PwC leaderboard: Fact Verification on DanFEVER
### Languages
This dataset is in Danish; the bcp47 is 'da_DK'.
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
A dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
The source language is from Wikipedia contributors and editors and from dictionary contributors and editors.
### Annotations
#### Annotation process
Detailed in this paper.
#### Who are the annotators?
The annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.
### Discussion of Biases
The data is drawn from relatively formal topics, and so may perform poorly outside these areas.
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
The data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.
Refer to this work as:
> Nørregaard and Derczynski (2021). "DanFEVER: claim verification dataset for Danish", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).
Bibliographic reference:
| [
"# Dataset Card for DanFEVER",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Leon Derczynski\n- Size of downloaded dataset files: 2.82 MiB\n- Size of the generated dataset: 2.80 MiB\n- Total amount of disk used: 5.62 MiB",
"### Dataset Summary\n\nWe present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.",
"### Supported Tasks and Leaderboards\n\nThis dataset supports the FEVER task, but in Danish.\n\n* PwC leaderboard: Fact Verification on DanFEVER",
"### Languages\n\nThis dataset is in Danish; the bcp47 is 'da_DK'.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nA dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nThe source language is from Wikipedia contributors editors and from dictionary contributors and editors.",
"### Annotations",
"#### Annotation process\n\nDetailed in this paper.",
"#### Who are the annotators?\n\nThe annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.",
"### Discussion of Biases\n\nThe data is drawn from relatively formal topics, and so may perform poorly outside these areas.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThe data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.\n\n\n\nRefer to this work as:\n\n> Nørregaard and Derczynski (2021). \"DanFEVER: claim verification dataset for Danish\", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).\n\nBibliographic reference:"
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-cc-by-4.0 #knowledge-verification #region-us \n",
"# Dataset Card for DanFEVER",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Leon Derczynski\n- Size of downloaded dataset files: 2.82 MiB\n- Size of the generated dataset: 2.80 MiB\n- Total amount of disk used: 5.62 MiB",
"### Dataset Summary\n\nWe present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.",
"### Supported Tasks and Leaderboards\n\nThis dataset supports the FEVER task, but in Danish.\n\n* PwC leaderboard: Fact Verification on DanFEVER",
"### Languages\n\nThis dataset is in Danish; the bcp47 is 'da_DK'.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nA dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nThe source language is from Wikipedia contributors editors and from dictionary contributors and editors.",
"### Annotations",
"#### Annotation process\n\nDetailed in this paper.",
"#### Who are the annotators?\n\nThe annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.",
"### Discussion of Biases\n\nThe data is drawn from relatively formal topics, and so may perform poorly outside these areas.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThe data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.\n\n\n\nRefer to this work as:\n\n> Nørregaard and Derczynski (2021). \"DanFEVER: claim verification dataset for Danish\", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).\n\nBibliographic reference:"
] |
d766cb8a7497d0d507d81f5f681a8d58deedf495 |
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111)
- **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en`
## Dataset Structure
### Data Instances
Feature |Count
---|---:
Documents |9 551
Tokens |165 739
Person entities |5 271
Location entities |3 114
Organization entities |3 732
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags
- `id`: a `string` feature.
- `tokens`: a `list` of `strings`
- `ner_tags`: a `list` of class IDs (`int`s) representing the NER class:
```
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
```
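As a quick illustration, the sketch below loads the corpus with the Hugging Face `datasets` library and decodes the integer `ner_tags` of one record back into the string labels listed above. The repository id matches this dataset card, but the `"train"` split name is an assumption here.

```
# Minimal sketch: decode ner_tags integers into the BIO labels listed above.
# Assumes the data loads via the Hugging Face `datasets` library under the
# repository id of this card; the "train" split name is an assumption.
# (Newer `datasets` versions may also require trust_remote_code=True for
# script-based datasets.)
from datasets import load_dataset

LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

btc = load_dataset("strombergnlp/broad_twitter_corpus", split="train")

example = btc[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{LABELS[tag_id]}")
```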
### Data Splits
Section|Region|Collection period|Description|Annotators|Tweet count
---|---|---|---|---|---:
A | UK| 2012.01| General collection |Expert| 1000
B |UK |2012.01-02 |Non-directed tweets |Expert |2000
E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200
F |Stratified |2009-2014| Twitterati |Crowd & expert |2000
G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351
H |Non-UK| 2014 |General collection |Crowd & expert |2000
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.
**Test**: Section F
**Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
**Training**: everything else
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| strombergnlp/broad_twitter_corpus | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-28T08:58:09+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "broad-twitter-corpus", "pretty_name": "Broad Twitter Corpus"} | 2022-07-01T14:46:36+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
| Dataset Card for broad\_twitter\_corpus
=======================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard: Named Entity Recognition on Broad Twitter Corpus
* Point of Contact: Leon Derczynski
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, Broad Twitter Corpus: A Diverse Named Entity Recognition Resource, for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: Named Entity Recognition on Broad Twitter Corpus
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; 'bcp47:en'
Dataset Structure
-----------------
### Data Instances
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags
* 'id': a 'string' feature.
* 'tokens': a 'list' of 'strings'
* 'ner\_tags': a 'list' of class IDs ('int's) representing the NER class:
### Data Splits
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.
Test: Section F
Development: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
Training: everything else
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Contributions
Author-added dataset @leondz
| [
"### Dataset Summary\n\n\nThis is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.\n\n\nSee the paper, Broad Twitter Corpus: A Diverse Named Entity Recognition Resource, for details.",
"### Supported Tasks and Leaderboards\n\n\n* Named Entity Recognition\n* On PWC: Named Entity Recognition on Broad Twitter Corpus",
"### Languages\n\n\nEnglish from UK, US, Australia, Canada, Ireland, New Zealand; 'bcp47:en'\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\nEach tweet contains an ID, a list of tokens, and a list of NER tags\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'strings'\n* 'ner\\_tags': a 'list' of class IDs ('int's) representing the NER class:",
"### Data Splits\n\n\n\nThe most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.\n\n\nTest: Section F\n\n\nDevelopment: Section H (the paper says \"second half of Section H\" but ordinality could be ambiguous, so it all goes in. Bonne chance)\n\n\nTraining: everything else\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International (CC BY 4.0)",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nThis is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.\n\n\nSee the paper, Broad Twitter Corpus: A Diverse Named Entity Recognition Resource, for details.",
"### Supported Tasks and Leaderboards\n\n\n* Named Entity Recognition\n* On PWC: Named Entity Recognition on Broad Twitter Corpus",
"### Languages\n\n\nEnglish from UK, US, Australia, Canada, Ireland, New Zealand; 'bcp47:en'\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\nEach tweet contains an ID, a list of tokens, and a list of NER tags\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'strings'\n* 'ner\\_tags': a 'list' of class IDs ('int's) representing the NER class:",
"### Data Splits\n\n\n\nThe most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.\n\n\nTest: Section F\n\n\nDevelopment: Section H (the paper says \"second half of Section H\" but ordinality could be ambiguous, so it all goes in. Bonne chance)\n\n\nTraining: everything else\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International (CC BY 4.0)",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] |
cc150b1a28983f4796ab486f6e1ef1d1047e523a |
# Dataset Card for "ipm-nel"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [http://www.derczynski.com/papers/ner_single.pdf](http://www.derczynski.com/papers/ner_single.pdf)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 120 KB
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises
the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities
and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface
forms; for example, this means linking "Paris" to the correct instance of a city named that (e.g. Paris,
France vs. Paris, Texas).
The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical
artist, person, product, sports team, TV show, and other.
The file is tab separated, in CoNLL format, with line breaks between tweets.
* Data preserves the tokenisation used in the Ritter datasets.
* PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given.
* In cases where a URI could not be agreed, or was not present in DBpedia, the linking URI is `NIL`.
See the paper, [Analysis of Named Entity Recognition and Linking for Tweets](http://www.derczynski.com/papers/ner_single.pdf) for a full description of the methodology.
### Supported Tasks and Leaderboards
* Dataset leaderboard on PWC: [Entity Linking on Derczynski](https://paperswithcode.com/sota/entity-linking-on-derczynski-1)
### Languages
English of unknown region (`bcp47:en`)
## Dataset Structure
### Data Instances
#### ipm_nel
- **Size of downloaded dataset files:** 120 KB
- **Size of the generated dataset:**
- **Total amount of disk used:**
An example of 'train' looks as follows.
```
{
'id': '0',
'tokens': ['#Astros', 'lineup', 'for', 'tonight', '.', 'Keppinger', 'sits', ',', 'Downs', 'plays', '2B', ',', 'CJ', 'bats', '5th', '.', '@alysonfooter', 'http://bit.ly/bHvgCS'],
'ner_tags': [9, 0, 0, 0, 0, 7, 0, 0, 7, 0, 0, 0, 7, 0, 0, 0, 0, 0],
'uris': "['http://dbpedia.org/resource/Houston_Astros', '', '', '', '', 'http://dbpedia.org/resource/Jeff_Keppinger', '', '', 'http://dbpedia.org/resource/Brodie_Downs', '', '', '', 'NIL', '', '', '', '', '']"
}
```
### Data Fields
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
- `uris`: a `list` of URIs (`string`) that disambiguate entities. Set to `NIL` when an entity has no DBpedia entry, or blank for outside-of-entity tokens.
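Because the example record above shows `uris` serialized as a single string rather than a list, a consumer may need to parse it back into per-token values. The sketch below does this with `ast.literal_eval` and pairs each URI with its token; it assumes the field really is stored as a stringified Python list, as in that example.

```
# Minimal sketch: recover the per-token URI list from the stringified `uris`
# field shown in the example record above, and pair it with the tokens.
# Assumes `uris` is stored as a Python-list literal inside a string.
import ast

record = {
    "tokens": ["#Astros", "lineup", "for", "tonight", "."],
    "uris": "['http://dbpedia.org/resource/Houston_Astros', '', '', '', '']",
}

uris = ast.literal_eval(record["uris"])  # list of strings; '' marks non-entity tokens
for token, uri in zip(record["tokens"], uris):
    if uri == "NIL":
        print(f"{token} -> entity with no DBpedia entry")
    elif uri:
        print(f"{token} -> {uri}")
```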
### Data Splits
| name |train|
|---------|----:|
|ipm_nel|183 sentences|
## Dataset Creation
### Curation Rationale
To gather a social media benchmark for named entity linking that is sufficiently different from newswire data.
### Source Data
#### Initial Data Collection and Normalization
The data is partly harvested from that distributed by [Ritter / Named Entity Recognition in Tweets: An Experimental Study](https://aclanthology.org/D11-1141/),
and partly taken from Twitter by the authors.
#### Who are the source language producers?
English-speaking Twitter users, between October 2011 and September 2013
### Annotations
#### Annotation process
The authors were allocated documents and marked them for named entities (where these were not already present) and then attempted to find
the best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers.
The annotation task was mediated using Crowdflower (Biewald, 2012). Our interface design was to show each volunteer the text of the tweet, any URL links contained
therein, and a set of candidate targets from DBpedia. The volunteers were encouraged to click on the URL links from the
tweet, to gain additional context and thus ensure that the correct DBpedia URI is chosen by them. Candidate entities were
shown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia
URI otherwise. In addition, the options "none of the above", "not an entity" and "cannot decide" were added, to allow the
volunteers to indicate that this entity mention has no corresponding DBpedia URI (none of the above), that the highlighted text
is not an entity, or that the tweet text (and any links, if available) did not provide sufficient information to reliably disambiguate the entity mention.
#### Who are the annotators?
The annotators are 10 volunteer NLP researchers, from the authors and the authors' institutions.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The data is annotated by NLP researchers; we know that this group has high agreement but low recall on English twitter text [C16-1111](https://aclanthology.org/C16-1111/).
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0. You must
acknowledge the author if you use this data, but apart from that, you're quite
free to do most things. See https://creativecommons.org/licenses/by/4.0/legalcode .
### Citation Information
```
@article{derczynski2015analysis,
title={Analysis of named entity recognition and linking for tweets},
author={Derczynski, Leon and Maynard, Diana and Rizzo, Giuseppe and Van Erp, Marieke and Gorrell, Genevieve and Troncy, Rapha{\"e}l and Petrak, Johann and Bontcheva, Kalina},
journal={Information Processing \& Management},
volume={51},
number={2},
pages={32--49},
year={2015},
publisher={Elsevier}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| strombergnlp/ipm_nel | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"named-entity-linking",
"region:us"
] | 2022-04-28T09:06:10+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "ipm-nel", "pretty_name": "IPM NEL (Derczynski)", "tags": ["named-entity-linking"]} | 2022-10-25T20:41:26+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #named-entity-linking #region-us
| Dataset Card for "ipm-nel"
==========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper: URL
* Point of Contact: Leon Derczynski
* Size of downloaded dataset files: 120 KB
* Size of the generated dataset:
* Total amount of disk used:
### Dataset Summary
This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises
the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities
and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface
forms; for example, this means linking "Paris" to the correct instance of a city named that (e.g. Paris,
France vs. Paris, Texas).
The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical
artist, person, product, sports team, TV show, and other.
The file is tab separated, in CoNLL format, with line breaks between tweets.
* Data preserves the tokenisation used in the Ritter datasets.
* PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given.
* In cases where a URI could not be agreed, or was not present in DBpedia, the linking URI is 'NIL'.
See the paper, Analysis of Named Entity Recognition and Linking for Tweets for a full description of the methodology.
### Supported Tasks and Leaderboards
* Dataset leaderboard on PWC: Entity Linking on Derczynski
### Languages
English of unknown region ('bcp47:en')
Dataset Structure
-----------------
### Data Instances
#### ipm\_nel
* Size of downloaded dataset files: 120 KB
* Size of the generated dataset:
* Total amount of disk used:
An example of 'train' looks as follows.
### Data Fields
* 'id': a 'string' feature.
* 'tokens': a 'list' of 'string' features.
* 'ner\_tags': a 'list' of classification labels ('int'). Full tagset with indices:
* 'uris': a 'list' of URIs ('string') that disambiguate entities. Set to 'NIL' when an entity has no DBpedia entry, or blank for outside-of-entity tokens.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
To gather a social media benchmark for named entity linking that is sufficiently different from newswire data.
### Source Data
#### Initial Data Collection and Normalization
The data is partly harvested from that distributed by Ritter / Named Entity Recognition in Tweets: An Experimental Study,
and partly taken from Twitter by the authors.
#### Who are the source language producers?
English-speaking Twitter users, between October 2011 and September 2013
### Annotations
#### Annotation process
The authors were allocated documents and marked them for named entities (where these were not already present) and then attempted to find
the best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers.
The annotation task was mediated using Crowdflower (Biewald, 2012). Our interface design was to show each volunteer the text of the tweet, any URL links contained
therein, and a set of candidate targets from DBpedia. The volunteers were encouraged to click on the URL links from the
tweet, to gain addition context and thus ensure that the correct DBpedia URI is chosen by them. Candidate entities were
shown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia
URI otherwise. In addition, the options ‘‘none of the above’’, ‘‘not an entity’’ and ‘‘cannot decide’’ were added, to allow the
volunteers to indicate that this entity mention has no corresponding DBpedia URI (none of the above), the highlighted text
is not an entity, or that the tweet text (and any links, if available) did not provide sufficient information to reliably disambiguate the entity mention.
#### Who are the annotators?
The annotators are 10 volunteer NLP researchers, from the authors and the authors' institutions.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The data is annotated by NLP researchers; we know that this group has high agreement but low recall on English twitter text C16-1111.
### Other Known Limitations
The above limitations apply.
Additional Information
----------------------
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0. You must
acknowledge the author if you use this data, but apart from that, you're quite
free to do most things. See URL .
### Contributions
Author-added dataset @leondz
| [
"### Dataset Summary\n\n\nThis data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises\nthe addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities\nand then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface\nforms; for example, this means linking \"Paris\" to the correct instance of a city named that (e.g. Paris,\nFrance vs. Paris, Texas).\n\n\nThe data concentrates on ten types of named entities: company, facility, geographic location, movie, musical\nartist, person, product, sports team, TV show, and other.\n\n\nThe file is tab separated, in CoNLL format, with line breaks between tweets.\n\n\n* Data preserves the tokenisation used in the Ritter datasets.\n* PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given.\n* In cases where a URI could not be agreed, or was not present in DBpedia, the linking URI is 'NIL'.\n\n\nSee the paper, Analysis of Named Entity Recognition and Linking for Tweets for a full description of the methodology.",
"### Supported Tasks and Leaderboards\n\n\n* Dataset leaderboard on PWC: Entity Linking on Derczynski",
"### Languages\n\n\nEnglish of unknown region ('bcp47:en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ipm\\_nel\n\n\n* Size of downloaded dataset files: 120 KB\n* Size of the generated dataset:\n* Total amount of disk used:\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'ner\\_tags': a 'list' of classification labels ('int'). Full tagset with indices:\n* 'uris': a 'list' of URIs ('string') that disambiguate entities. Set to 'NIL' when an entity has no DBpedia entry, or blank for outside-of-entity tokens.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo gather a social media benchmark for named entity linking that is sufficiently different from newswire data.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is partly harvested from that distributed by Ritter / Named Entity Recognition in Tweets: An Experimental Study,\nand partly taken from Twitter by the authors.",
"#### Who are the source language producers?\n\n\nEnglish-speaking Twitter users, between October 2011 and September 2013",
"### Annotations",
"#### Annotation process\n\n\nThe authors were allocated documents and marked them for named entities (where these were not already present) and then attempted to find\nthe best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers.\nThe annotation task was mediated using Crowdflower (Biewald, 2012). Our interface design was to show each volunteer the text of the tweet, any URL links contained\ntherein, and a set of candidate targets from DBpedia. The volunteers were encouraged to click on the URL links from the\ntweet, to gain addition context and thus ensure that the correct DBpedia URI is chosen by them. Candidate entities were\nshown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia\nURI otherwise. In addition, the options ‘‘none of the above’’, ‘‘not an entity’’ and ‘‘cannot decide’’ were added, to allow the\nvolunteers to indicate that this entity mention has no corresponding DBpedia URI (none of the above), the highlighted text\nis not an entity, or that the tweet text (and any links, if available) did not provide sufficient information to reliably disambiguate the entity mention.",
"#### Who are the annotators?\n\n\nThe annotators are 10 volunteer NLP researchers, from the authors and the authors' institutions.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. User names are preserved.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThere's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.",
"### Discussion of Biases\n\n\nThe data is annotated by NLP researchers; we know that this group has high agreement but low recall on English twitter text C16-1111.",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0. You must\nacknowledge the author if you use this data, but apart from that, you're quite\nfree to do most things. See URL .",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #named-entity-linking #region-us \n",
"### Dataset Summary\n\n\nThis data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises\nthe addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities\nand then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface\nforms; for example, this means linking \"Paris\" to the correct instance of a city named that (e.g. Paris,\nFrance vs. Paris, Texas).\n\n\nThe data concentrates on ten types of named entities: company, facility, geographic location, movie, musical\nartist, person, product, sports team, TV show, and other.\n\n\nThe file is tab separated, in CoNLL format, with line breaks between tweets.\n\n\n* Data preserves the tokenisation used in the Ritter datasets.\n* PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given.\n* In cases where a URI could not be agreed, or was not present in DBpedia, the linking URI is 'NIL'.\n\n\nSee the paper, Analysis of Named Entity Recognition and Linking for Tweets for a full description of the methodology.",
"### Supported Tasks and Leaderboards\n\n\n* Dataset leaderboard on PWC: Entity Linking on Derczynski",
"### Languages\n\n\nEnglish of unknown region ('bcp47:en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ipm\\_nel\n\n\n* Size of downloaded dataset files: 120 KB\n* Size of the generated dataset:\n* Total amount of disk used:\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'ner\\_tags': a 'list' of classification labels ('int'). Full tagset with indices:\n* 'uris': a 'list' of URIs ('string') that disambiguate entities. Set to 'NIL' when an entity has no DBpedia entry, or blank for outside-of-entity tokens.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo gather a social media benchmark for named entity linking that is sufficiently different from newswire data.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is partly harvested from that distributed by Ritter / Named Entity Recognition in Tweets: An Experimental Study,\nand partly taken from Twitter by the authors.",
"#### Who are the source language producers?\n\n\nEnglish-speaking Twitter users, between October 2011 and September 2013",
"### Annotations",
"#### Annotation process\n\n\nThe authors were allocated documents and marked them for named entities (where these were not already present) and then attempted to find\nthe best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers.\nThe annotation task was mediated using Crowdflower (Biewald, 2012). Our interface design was to show each volunteer the text of the tweet, any URL links contained\ntherein, and a set of candidate targets from DBpedia. The volunteers were encouraged to click on the URL links from the\ntweet, to gain addition context and thus ensure that the correct DBpedia URI is chosen by them. Candidate entities were\nshown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia\nURI otherwise. In addition, the options ‘‘none of the above’’, ‘‘not an entity’’ and ‘‘cannot decide’’ were added, to allow the\nvolunteers to indicate that this entity mention has no corresponding DBpedia URI (none of the above), the highlighted text\nis not an entity, or that the tweet text (and any links, if available) did not provide sufficient information to reliably disambiguate the entity mention.",
"#### Who are the annotators?\n\n\nThe annotators are 10 volunteer NLP researchers, from the authors and the authors' institutions.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. User names are preserved.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThere's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.",
"### Discussion of Biases\n\n\nThe data is annotated by NLP researchers; we know that this group has high agreement but low recall on English twitter text C16-1111.",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0. You must\nacknowledge the author if you use this data, but apart from that, you're quite\nfree to do most things. See URL .",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] |
5e511fb803405e3b2454c77ba8928fe47557017e |
# Dataset Card for "shaj"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1](https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1)
- **Paper:** [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:** 1.85 MiB
### Dataset Summary
This is an abusive/offensive language detection dataset for Albanian. The data is formatted
following the OffensEval convention, with three tasks:
* Subtask A: Offensive (OFF) or not (NOT)
* Subtask B: Untargeted (UNT) or targeted insult (TIN)
* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
Notes on the above:
* The subtask A field should always be filled.
* The subtask B field should only be filled if there's "offensive" (OFF) in A.
* The subtask C field should only be filled if there's "targeted" (TIN) in B.
The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon"
See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.
### Supported Tasks and Leaderboards
* Task A leaderboard at [paperswithcode.com/sota/hate-speech-detection-on-shaj](https://paperswithcode.com/sota/hate-speech-detection-on-shaj)
### Languages
Albanian (`bcp47:sq-AL`)
## Dataset Structure
### Data Instances
#### shaj
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:** 1.85 MiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'PLACEHOLDER TEXT',
'subtask_a': 1,
'subtask_b': 0,
'subtask_c': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string`.
- `subtask_a`: whether or not the instance is offensive; `0: OFF, 1: NOT`
- `subtask_b`: whether an offensive instance is a targeted insult; `0: TIN, 1: UNT, 2: not applicable`
- `subtask_c`: what a targeted insult is aimed at; `0: IND, 1: GRP, 2: OTH, 3: not applicable`
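To make the integer codes above easier to read, the sketch below maps them back to their OffensEval-style label names. The mappings are copied from the field list; the `decode` helper and the example record are purely illustrative and not part of the dataset loader.

```
# Minimal sketch: translate the integer subtask codes described above into
# their OffensEval-style label names (mappings copied from the field list).
SUBTASK_A = {0: "OFF", 1: "NOT"}
SUBTASK_B = {0: "TIN", 1: "UNT", 2: "not applicable"}
SUBTASK_C = {0: "IND", 1: "GRP", 2: "OTH", 3: "not applicable"}

def decode(record):
    """Return human-readable labels for one SHAJ record with the fields above."""
    return (SUBTASK_A[record["subtask_a"]],
            SUBTASK_B[record["subtask_b"]],
            SUBTASK_C[record["subtask_c"]])

print(decode({"subtask_a": 0, "subtask_b": 0, "subtask_c": 0}))  # ('OFF', 'TIN', 'IND')
```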
### Data Splits
| name |train|
|---------|----:|
|shaj|11874 sentences|
## Dataset Creation
### Curation Rationale
Collecting data for enabling offensive speech detection in Albanian
### Source Data
#### Initial Data Collection and Normalization
The text is scraped from comments on popular Albanian YouTube and Instagram accounts.
An extended discussion is given in the paper in section 3.2.
#### Who are the source language producers?
People who comment on a selection of high-activity Albanian instagram and youtube profiles.
### Annotations
#### Annotation process
The annotation scheme was taken from OffensEval 2019 and applied by two native speaker authors of the paper as well as their friends and family.
#### Who are the annotators?
Albanian native speakers, male and female, aged 20-60.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@article{nurce2021detecting,
title={Detecting Abusive Albanian},
author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
journal={arXiv preprint arXiv:2107.13592},
year={2021}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| strombergnlp/shaj | [
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"arxiv:2107.13592",
"doi:10.57967/hf/0514",
"region:us"
] | 2022-04-28T09:07:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text_classification"], "task_ids": ["hate-speech-detection", "text-classification-other-hate-speech-detection"], "paperswithcode_id": "shaj", "pretty_name": "SHAJ", "languages": ["sq", "sq-AL"], "licenses": ["cc-by-4.0"], "extra_gated_prompt": "Warning: this repository contains harmful content (abusive language, hate speech)."} | 2022-06-14T13:03:37+00:00 | [
"2107.13592"
] | [] | TAGS
#task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #arxiv-2107.13592 #doi-10.57967/hf/0514 #region-us
| Dataset Card for "shaj"
=======================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository: URL
* Paper: URL
* Point of Contact: Leon Derczynski
* Size of downloaded dataset files: 769.21 KiB
* Size of the generated dataset: 1.06 MiB
* Total amount of disk used: 1.85 MiB
### Dataset Summary
This is an abusive/offensive language detection dataset for Albanian. The data is formatted
following the OffensEval convention, with three tasks:
* Subtask A: Offensive (OFF) or not (NOT)
* Subtask B: Untargeted (UNT) or targeted insult (TIN)
* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
Notes on the above:
* The subtask A field should always be filled.
* The subtask B field should only be filled if there's "offensive" (OFF) in A.
* The subtask C field should only be filled if there's "targeted" (TIN) in B.
The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon"
See the paper URL for full details.
### Supported Tasks and Leaderboards
* Task A leaderboard at URL
### Languages
Albanian ('bcp47:sq-AL')
Dataset Structure
-----------------
### Data Instances
#### shaj
* Size of downloaded dataset files: 769.21 KiB
* Size of the generated dataset: 1.06 MiB
* Total amount of disk used: 1.85 MiB
An example of 'train' looks as follows.
### Data Fields
* 'id': a 'string' feature.
* 'text': a 'string'.
* 'subtask\_a': whether or not the instance is offensive; '0: OFF, 1: NOT'
* 'subtask\_b': whether an offensive instance is a targeted insult; '0: TIN, 1: UNT, 2: not applicable'
* 'subtask\_c': what a targeted insult is aimed at; '0: IND, 1: GRP, 2: OTH, 3: not applicable'
### Data Splits
Dataset Creation
----------------
### Curation Rationale
Collecting data for enabling offensive speech detection in Albanian
### Source Data
#### Initial Data Collection and Normalization
The text is scraped from comments on popular Albanian YouTube and Instagram accounts.
An extended discussion is given in the paper in section 3.2.
#### Who are the source language producers?
People who comment on a selection of high-activity Albanian instagram and youtube profiles.
### Annotations
#### Annotation process
The annotation scheme was taken from OffensEval 2019 and applied by two native speaker authors of the paper as well as their friends and family.
#### Who are the annotators?
Albanian native speakers, male and female, aged 20-60.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The data definitely contains abusive language.
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Contributions
Author-added dataset @leondz
| [
"### Dataset Summary\n\n\nThis is an abusive/offensive language detection dataset for Albanian. The data is formatted\nfollowing the OffensEval convention, with three tasks:\n\n\n* Subtask A: Offensive (OFF) or not (NOT)\n* Subtask B: Untargeted (UNT) or targeted insult (TIN)\n* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)\n\n\nNotes on the above:\n\n\n* The subtask A field should always be filled.\n* The subtask B field should only be filled if there's \"offensive\" (OFF) in A.\n* The subtask C field should only be filled if there's \"targeted\" (TIN) in B.\n\n\nThe dataset name is a backronym, also standing for \"Spoken Hate in the Albanian Jargon\"\n\n\nSee the paper URL for full details.",
"### Supported Tasks and Leaderboards\n\n\n* Task A leaderboard at URL",
"### Languages\n\n\nAlbanian ('bcp47:sq-AL')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### shaj\n\n\n* Size of downloaded dataset files: 769.21 KiB\n* Size of the generated dataset: 1.06 MiB\n* Total amount of disk used: 1.85 MiB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'text': a 'string'.\n* 'subtask\\_a': whether or not the instance is offensive; '0: OFF, 1: NOT'\n* 'subtask\\_b': whether an offensive instance is a targeted insult; '0: TIN, 1: UNT, 2: not applicable'\n* 'subtask\\_c': what a targeted insult is aimed at; '0: IND, 1: GRP, 2: OTH, 3: not applicable'",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nCollecting data for enabling offensive speech detection in Albanian",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe text is scraped from comments on popular Albanian YouTube and Instagram accounts.\nAn extended discussion is given in the paper in section 3.2.",
"#### Who are the source language producers?\n\n\nPeople who comment on a selection of high-activity Albanian instagram and youtube profiles.",
"### Annotations",
"#### Annotation process\n\n\nThe annotation scheme was taken from OffensEval 2019 and applied by two native speaker authors of the paper as well as their friends and family.",
"#### Who are the annotators?\n\n\nAlbanian native speakers, male and female, aged 20-60.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. No PII removal has been performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe data definitely contains abusive language.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] | [
"TAGS\n#task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #arxiv-2107.13592 #doi-10.57967/hf/0514 #region-us \n",
"### Dataset Summary\n\n\nThis is an abusive/offensive language detection dataset for Albanian. The data is formatted\nfollowing the OffensEval convention, with three tasks:\n\n\n* Subtask A: Offensive (OFF) or not (NOT)\n* Subtask B: Untargeted (UNT) or targeted insult (TIN)\n* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)\n\n\nNotes on the above:\n\n\n* The subtask A field should always be filled.\n* The subtask B field should only be filled if there's \"offensive\" (OFF) in A.\n* The subtask C field should only be filled if there's \"targeted\" (TIN) in B.\n\n\nThe dataset name is a backronym, also standing for \"Spoken Hate in the Albanian Jargon\"\n\n\nSee the paper URL for full details.",
"### Supported Tasks and Leaderboards\n\n\n* Task A leaderboard at URL",
"### Languages\n\n\nAlbanian ('bcp47:sq-AL')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### shaj\n\n\n* Size of downloaded dataset files: 769.21 KiB\n* Size of the generated dataset: 1.06 MiB\n* Total amount of disk used: 1.85 MiB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'text': a 'string'.\n* 'subtask\\_a': whether or not the instance is offensive; '0: OFF, 1: NOT'\n* 'subtask\\_b': whether an offensive instance is a targeted insult; '0: TIN, 1: UNT, 2: not applicable'\n* 'subtask\\_c': what a targeted insult is aimed at; '0: IND, 1: GRP, 2: OTH, 3: not applicable'",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nCollecting data for enabling offensive speech detection in Albanian",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe text is scraped from comments on popular Albanian YouTube and Instagram accounts.\nAn extended discussion is given in the paper in section 3.2.",
"#### Who are the source language producers?\n\n\nPeople who comment on a selection of high-activity Albanian instagram and youtube profiles.",
"### Annotations",
"#### Annotation process\n\n\nThe annotation scheme was taken from OffensEval 2019 and applied by two native speaker authors of the paper as well as their friends and family.",
"#### Who are the annotators?\n\n\nAlbanian native speakers, male and female, aged 20-60.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. No PII removal has been performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe data definitely contains abusive language.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] |
1075212523ac84e51b0fe6bc41fef4cd2bf695cc |
# Dataset Card for "dkstance / DAST"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/jointrumourstanceandveracity/](https://stromberg.ai/publication/jointrumourstanceandveracity/)
- **Repository:** [https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137](https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137)
- **Paper:** [https://aclanthology.org/W19-6122/](https://aclanthology.org/W19-6122/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 4.72 MiB
- **Size of the generated dataset:** 3.69 MiB
- **Total amount of disk used:** 8.41 MiB
### Dataset Summary
This is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.
The dataset is applicable for supervised stance classification and rumour veracity prediction.
### Supported Tasks and Leaderboards
* Stance prediction
### Languages
Danish, bcp47: `da-DK`
## Dataset Structure
### Data Instances
#### DAST / dkstance
- **Size of downloaded dataset files:** 4.72 MiB
- **Size of the generated dataset:** 3.69 MiB
- **Total amount of disk used:** 8.41 MiB
An example of 'train' looks as follows.
```
{
'id': '1',
'native_id': 'ebwjq5z',
'text': 'Med de udfordringer som daginstitutionerne har med normeringer, og økonomi i det hele taget, synes jeg det er en vanvittig beslutning at prioritere skattebetalt vegansk kost i daginstitutionerne. Brug dog pengene på noget mere personale, og lad folk selv betale for deres individuelle kostønsker.',
'parent_id': 'a6o3us',
'parent_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'parent_stance': 0,
'source_id': 'a6o3us',
'source_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'source_stance': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `native_id`: a `string` feature representing the native ID of the entry.
- `text`: a `string` of the comment text in which stance is annotated.
- `parent_id`: the `native_id` of this comment's parent.
- `parent_text`: a `string` of the parent comment's text.
- `parent_stance`: the label of the stance in the comment towards its parent comment.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
- `source_id`: the `native_id` of this comment's source / post.
- `source_text`: a `string` of the source / post text.
- `source_stance`: the label of the stance in the comment towards the original source post.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
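The snippet below is a minimal sketch of reading these fields with the `datasets` library. It assumes the Hub repository id `strombergnlp/dkstance`, the `train` split listed under Data Splits, and that the stance fields are stored as `ClassLabel` features; treat it as illustrative rather than as canonical loading code.
```python
from datasets import load_dataset

# Load the stance-annotated Reddit comments (repository id taken from this card).
dataset = load_dataset("strombergnlp/dkstance", split="train")

example = dataset[0]
print(example["text"])         # the comment whose stance is annotated
print(example["parent_text"])  # the comment or post it replies to

# Assuming the stance fields are ClassLabel features, map the integer labels
# back to their SDQC names ("Supporting", "Denying", "Querying", "Commenting").
parent_stance = dataset.features["parent_stance"]
source_stance = dataset.features["source_stance"]
print(parent_stance.int2str(example["parent_stance"]))
print(source_stance.int2str(example["source_stance"]))
```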
### Data Splits
| name |size|
|---------|----:|
|train|3122|
|validation|1066|
|test|1060|
These splits were specified after the original research was reported. The splits add an extra level of rigour, in that no source post's comment tree is spread over more than one partition.
## Dataset Creation
### Curation Rationale
Comments around rumourous claims to enable rumour and stance analysis in Danish
### Source Data
#### Initial Data Collection and Normalization
The data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.
#### Who are the source language producers?
Danish-speaking Reddit users.
### Annotations
#### Annotation process
There was a multi-user annotation process mediated through a purpose-built interface for annotating stance in Reddit threads.
#### Who are the annotators?
* Age: 20-30.
* Gender: male.
* Race/ethnicity: white northern European.
* Native language: Danish.
* Socioeconomic status: higher education student.
### Personal and Sensitive Information
The data was public at the time of collection. User names are not preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The source of the text has a strong demographic bias, being mostly young white men who are vocal about their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
An NLP data statement is included in the paper describing the work, [https://aclanthology.org/W19-6122.pdf](https://aclanthology.org/W19-6122.pdf)
### Citation Information
```
@inproceedings{lillie-etal-2019-joint,
title = "Joint Rumour Stance and Veracity Prediction",
author = "Lillie, Anders Edelbo and
Middelboe, Emil Refsgaard and
Derczynski, Leon",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6122",
pages = "208--221",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| strombergnlp/dkstance | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | 2022-04-28T09:07:39+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["da"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "dast", "pretty_name": "DAST", "extra_gated_prompt": "Warning: the data in this repository contains harmful content (misinformative claims).", "tags": ["stance-detection"]} | 2022-10-25T20:45:42+00:00 | [] | [
"da"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-cc-by-4.0 #stance-detection #region-us
| Dataset Card for "dkstance / DAST"
==================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Leon Derczynski
* Size of downloaded dataset files:
* Size of the generated dataset:
* Total amount of disk used:
### Dataset Summary
This is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.
The dataset is applicable for supervised stance classification and rumour veracity prediction.
### Supported Tasks and Leaderboards
* Stance prediction
### Languages
Dataset Structure
-----------------
### Data Instances
#### DAST / dkstance
* Size of downloaded dataset files: 4.72 MiB
* Size of the generated dataset: 3.69 MiB
* Total amount of disk used: 8.41 MiB
An example of 'train' looks as follows.
### Data Fields
* 'id': a 'string' feature.
* 'native\_id': a 'string' feature representing the native ID of the entry.
* 'text': a 'string' of the comment text in which stance is annotated.
* 'parent\_id': the 'native\_id' of this comment's parent.
* 'parent\_text': a 'string' of the parent comment's text.
* 'parent\_stance': the label of the stance in the comment towards its parent comment.
* 'source\_id': the 'native\_id' of this comment's source / post.
* 'source\_text': a 'string' of the source / post text.
* 'source\_stance': the label of the stance in the comment towards the original source post.
### Data Splits
These splits are specified after the original reserach was reported. The splits add an extra level of rigour, in that no source posts' comment tree is spread over more than one partition.
Dataset Creation
----------------
### Curation Rationale
Comments around rumourous claims to enable rumour and stance analysis in Danish
### Source Data
#### Initial Data Collection and Normalization
The data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.
#### Who are the source language producers?
Danish-speaking Twitter users.
### Annotations
#### Annotation process
There was multi-user annotation process mediated through a purpose-built interface for annotating stance in Reddit threads.
#### Who are the annotators?
* Age: 20-30.
* Gender: male.
* Race/ethnicity: white northern European.
* Native language: Danish.
* Socioeconomic status: higher education student.
### Personal and Sensitive Information
The data was public at the time of collection. User names are not preserved.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The source of the text has a strong demographic bias, being mostly young white men who are vocal their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.
### Other Known Limitations
The above limitations apply.
Additional Information
----------------------
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
An NLP data statement is included in the paper describing the work, URL
### Contributions
Author-added dataset @leondz
| [
"### Dataset Summary\n\n\nThis is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.\n\n\nThe dataset is applicable for supervised stance classification and rumour veracity prediction.",
"### Supported Tasks and Leaderboards\n\n\n* Stance prediction",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### DAST / dkstance\n\n\n* Size of downloaded dataset files: 4.72 MiB\n* Size of the generated dataset: 3.69 MiB\n* Total amount of disk used: 8.41 MiB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'native\\_id': a 'string' feature representing the native ID of the entry.\n* 'text': a 'string' of the comment text in which stance is annotated.\n* 'parent\\_id': the 'native\\_id' of this comment's parent.\n* 'parent\\_text': a 'string' of the parent comment's text.\n* 'parent\\_stance': the label of the stance in the comment towards its parent comment.\n* 'source\\_id': the 'native\\_id' of this comment's source / post.\n* 'source\\_text': a 'string' of the source / post text.\n* 'source\\_stance': the label of the stance in the comment towards the original source post.",
"### Data Splits\n\n\n\nThese splits are specified after the original reserach was reported. The splits add an extra level of rigour, in that no source posts' comment tree is spread over more than one partition.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nComments around rumourous claims to enable rumour and stance analysis in Danish",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.",
"#### Who are the source language producers?\n\n\nDanish-speaking Twitter users.",
"### Annotations",
"#### Annotation process\n\n\nThere was multi-user annotation process mediated through a purpose-built interface for annotating stance in Reddit threads.",
"#### Who are the annotators?\n\n\n* Age: 20-30.\n* Gender: male.\n* Race/ethnicity: white northern European.\n* Native language: Danish.\n* Socioeconomic status: higher education student.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. User names are not preserved.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThere's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.",
"### Discussion of Biases\n\n\nThe source of the text has a strong demographic bias, being mostly young white men who are vocal their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.\nAn NLP data statement is included in the paper describing the work, URL",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-cc-by-4.0 #stance-detection #region-us \n",
"### Dataset Summary\n\n\nThis is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.\n\n\nThe dataset is applicable for supervised stance classification and rumour veracity prediction.",
"### Supported Tasks and Leaderboards\n\n\n* Stance prediction",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### DAST / dkstance\n\n\n* Size of downloaded dataset files: 4.72 MiB\n* Size of the generated dataset: 3.69 MiB\n* Total amount of disk used: 8.41 MiB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'native\\_id': a 'string' feature representing the native ID of the entry.\n* 'text': a 'string' of the comment text in which stance is annotated.\n* 'parent\\_id': the 'native\\_id' of this comment's parent.\n* 'parent\\_text': a 'string' of the parent comment's text.\n* 'parent\\_stance': the label of the stance in the comment towards its parent comment.\n* 'source\\_id': the 'native\\_id' of this comment's source / post.\n* 'source\\_text': a 'string' of the source / post text.\n* 'source\\_stance': the label of the stance in the comment towards the original source post.",
"### Data Splits\n\n\n\nThese splits are specified after the original reserach was reported. The splits add an extra level of rigour, in that no source posts' comment tree is spread over more than one partition.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nComments around rumourous claims to enable rumour and stance analysis in Danish",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.",
"#### Who are the source language producers?\n\n\nDanish-speaking Twitter users.",
"### Annotations",
"#### Annotation process\n\n\nThere was multi-user annotation process mediated through a purpose-built interface for annotating stance in Reddit threads.",
"#### Who are the annotators?\n\n\n* Age: 20-30.\n* Gender: male.\n* Race/ethnicity: white northern European.\n* Native language: Danish.\n* Socioeconomic status: higher education student.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. User names are not preserved.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThere's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.",
"### Discussion of Biases\n\n\nThe source of the text has a strong demographic bias, being mostly young white men who are vocal their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.\nAn NLP data statement is included in the paper describing the work, URL",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] |
d617a5df1b7ec95b1b290926e2d1f20c55d2c0b9 |
# Dataset Card for "polstance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/politicalstanceindanish/](https://stromberg.ai/publication/politicalstanceindanish/)
- **Repository:** [https://github.com/StrombergNLP/Political-Stance-in-Danish/](https://github.com/StrombergNLP/Political-Stance-in-Danish/)
- **Paper:** [https://aclanthology.org/W19-6121/](https://aclanthology.org/W19-6121/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 548 KB
- **Size of the generated dataset:** 222 KB
- **Total amount of disk used:** 770 KB
### Dataset Summary
Political stance in Danish. Examples represent statements by
politicians and are annotated as for, against, or neutral towards a given topic/article.
### Supported Tasks and Leaderboards
* Stance classification
### Languages
Danish, bcp47: `da-DK`
## Dataset Structure
### Data Instances
#### polstance
An example of 'train' looks as follows.
```
{
'id': '0',
'topic': 'integration',
'quote': 'Der kunne jeg godt tænke mig, at der stod mere eksplicit, at de (landene, red.) skal bekæmpe menneskesmuglere og tage imod deres egne borgere',
'label': 2,
'quoteID': '516',
'party': 'Det Konservative Folkeparti',
'politician': 'Naser Khader',
}
```
### Data Fields
- `id`: a `string` feature.
- `topic`: a `string` expressing a topic.
- `quote`: a `string` to be classified for its stance to the topic.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "against",
1: "neutral",
2: "for",
```
- `quoteID`: a `string` of the internal quote ID.
- `party`: a `string` describing the party affiliation of the quote utterer at the time of utterance.
- `politician`: a `string` naming the politician who uttered the quote.
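As a hedged illustration of how these fields fit together, the sketch below loads the corpus and decodes the stance label; it assumes the Hub repository id `strombergnlp/polstance`, a `train` split, and that `label` is a `ClassLabel` feature.
```python
from datasets import load_dataset

# Load the Danish political stance quotes (repository id taken from this card).
quotes = load_dataset("strombergnlp/polstance", split="train")

row = quotes[0]
print(row["topic"], "-", row["politician"], f"({row['party']})")
print(row["quote"])

# Assuming `label` is a ClassLabel feature, recover "against" / "neutral" / "for".
label_feature = quotes.features["label"]
print(label_feature.int2str(row["label"]))
```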
### Data Splits
| name |train|
|---------|----:|
|polstance|900 sentences|
## Dataset Creation
### Curation Rationale
Collection of quotes from politicians to allow detecting how political quotes orient to issues.
### Source Data
#### Initial Data Collection and Normalization
The data is taken from proceedings of the Danish parliament, the Folketing - [ft.dk](https://ft.dk).
#### Who are the source language producers?
Danish politicians
### Annotations
#### Annotation process
Annotators labelled comments for being against, neutral, or for a specified topic
#### Who are the annotators?
Danish native speakers, 20s, male, studying Software Design.
### Personal and Sensitive Information
The data was public at the time of collection and will remain open public record by law in Denmark.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| strombergnlp/polstance | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | 2022-04-28T09:08:13+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["da"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-analysis"], "paperswithcode_id": "polstance", "pretty_name": "Political Stance for Danish", "tags": ["stance-detection"]} | 2022-10-25T20:42:18+00:00 | [] | [
"da"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-analysis #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Danish #license-cc-by-4.0 #stance-detection #region-us
| Dataset Card for "polstance"
============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Leon Derczynski
* Size of downloaded dataset files: 548 KB
* Size of the generated dataset: 222 KB
* Total amount of disk used: 770 KB
### Dataset Summary
Political stance in Danish. Examples represent statements by
politicians and are annotated for, against, or neutral to a given topic/article.
### Supported Tasks and Leaderboards
*
### Languages
Danish, bcp47: 'da-DK'
Dataset Structure
-----------------
### Data Instances
#### polstance
An example of 'train' looks as follows.
### Data Fields
* 'id': a 'string' feature.
* 'topic': a 'string' expressing a topic.
* 'quote': a 'string' to be classified for its stance to the topic.
* 'label': a class label representing the stance the text expresses towards the target. Full tagset with indices:
* 'quoteID': a 'string' of the internal quote ID.
* 'party': a 'string' describing the party affiliation of the quote utterer at the time of utterance.
* 'politician': a 'string' naming the politician who uttered the quote.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
Collection of quotes from politicians to allow detecting how political quotes orient to issues.
### Source Data
#### Initial Data Collection and Normalization
The data is taken from proceedings of the Danish parliament, the Folketing - URL.
#### Who are the source language producers?
Danish polticians
### Annotations
#### Annotation process
Annotators labelled comments for being against, neutral, or for a specified topic
#### Who are the annotators?
Danish native speakers, 20s, male, studying Software Design.
### Personal and Sensitive Information
The data was public at the time of collection and will remain open public record by law in Denmark.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
Additional Information
----------------------
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Contributions
Author-added dataset @leondz
| [
"### Dataset Summary\n\n\nPolitical stance in Danish. Examples represent statements by\npoliticians and are annotated for, against, or neutral to a given topic/article.",
"### Supported Tasks and Leaderboards\n\n\n*",
"### Languages\n\n\nDanish, bcp47: 'da-DK'\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### polstance\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'topic': a 'string' expressing a topic.\n* 'quote': a 'string' to be classified for its stance to the topic.\n* 'label': a class label representing the stance the text expresses towards the target. Full tagset with indices:\n* 'quoteID': a 'string' of the internal quote ID.\n* 'party': a 'string' describing the party affiliation of the quote utterer at the time of utterance.\n* 'politician': a 'string' naming the politician who uttered the quote.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nCollection of quotes from politicians to allow detecting how political quotes orient to issues.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is taken from proceedings of the Danish parliament, the Folketing - URL.",
"#### Who are the source language producers?\n\n\nDanish polticians",
"### Annotations",
"#### Annotation process\n\n\nAnnotators labelled comments for being against, neutral, or for a specified topic",
"#### Who are the annotators?\n\n\nDanish native speakers, 20s, male, studying Software Design.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection and will remain open public record by law in Denmark.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-analysis #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Danish #license-cc-by-4.0 #stance-detection #region-us \n",
"### Dataset Summary\n\n\nPolitical stance in Danish. Examples represent statements by\npoliticians and are annotated for, against, or neutral to a given topic/article.",
"### Supported Tasks and Leaderboards\n\n\n*",
"### Languages\n\n\nDanish, bcp47: 'da-DK'\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### polstance\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'topic': a 'string' expressing a topic.\n* 'quote': a 'string' to be classified for its stance to the topic.\n* 'label': a class label representing the stance the text expresses towards the target. Full tagset with indices:\n* 'quoteID': a 'string' of the internal quote ID.\n* 'party': a 'string' describing the party affiliation of the quote utterer at the time of utterance.\n* 'politician': a 'string' naming the politician who uttered the quote.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nCollection of quotes from politicians to allow detecting how political quotes orient to issues.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is taken from proceedings of the Danish parliament, the Folketing - URL.",
"#### Who are the source language producers?\n\n\nDanish polticians",
"### Annotations",
"#### Annotation process\n\n\nAnnotators labelled comments for being against, neutral, or for a specified topic",
"#### Who are the annotators?\n\n\nDanish native speakers, 20s, male, studying Software Design.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection and will remain open public record by law in Denmark.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] |
8ad62edaaf487414e527d3f59edd6f6b52778b41 |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/StrombergNLP/bornholmsk
- **Repository:** https://github.com/StrombergNLP/bornholmsk
- **Paper:** https://aclanthology.org/W19-6138/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This corpus introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian.
Sammenfattnijng på borrijnholmst: Dæjnna artikkelijn introduserer natursprågsresurser å varktoi for borrijnholmst, ed språg a dær snakkes på ön Borrijnholm me rødder i danst å i nær familia me skånst.
For more details, see the paper [Bornholmsk Natural Language Processing: Resources and Tools](https://aclanthology.org/W19-6138/).
### Supported Tasks and Leaderboards
*
### Languages
Bornholmsk, a language variant of Danish spoken on the island of Bornholm. bcp47: `da-bornholm`
## Dataset Structure
### Data Instances
13,169 lines, 175,167 words, 801 KB
### Data Fields
`id`: the sentence ID, `int`
`text`: the Bornholmsk text, `string`
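A small, hedged example of iterating over the corpus, e.g. to feed a language model; it assumes the Hub repository id `strombergnlp/bornholmsk` and that the monolithic data is exposed as a single `train` split.
```python
from datasets import load_dataset

# Load the Bornholmsk lines (repository id taken from this card; split name assumed).
corpus = load_dataset("strombergnlp/bornholmsk", split="train")

# Each record holds just a sentence id and a line of Bornholmsk text.
for record in corpus.select(range(5)):
    print(record["id"], record["text"])
```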
### Data Splits
Monolithic
## Dataset Creation
### Curation Rationale
To gather as much digital Bornholmsk together as possible
### Source Data
#### Initial Data Collection and Normalization
From many places - see paper for details. Sources include poems, songs, translations from Danish, folk stories, dictionary entries.
#### Who are the source language producers?
Native speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
No annotations
### Personal and Sensitive Information
Unknown, but low risk of presence, given the source material
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to capture Bornholmsk digitally and provide a way for NLP systems to interact with it, and perhaps even spark interest in dealing with the language.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
@inproceedings{derczynski-kjeldsen-2019-bornholmsk,
title = "Bornholmsk Natural Language Processing: Resources and Tools",
author = "Derczynski, Leon and
Kjeldsen, Alex Speed",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6138",
pages = "338--344",
}
``` | strombergnlp/bornholmsk | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"region:us"
] | 2022-04-28T09:08:37+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["da"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "language_bcp47": ["da", "da-bornholm"]} | 2022-10-25T20:40:56+00:00 | [] | [
"da"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Danish #license-cc-by-4.0 #region-us
|
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: Leon Derczynski
### Dataset Summary
This corpus introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian.
Sammenfattnijng på borrijnholmst: Dæjnna artikkelijn introduserer natursprågsresurser å varktoi for borrijnholmst, ed språg a dær snakkes på ön Borrijnholm me rødder i danst å i nær familia me skånst.
For more details, see the paper Bornholmsk Natural Language Processing: Resources and Tools.
### Supported Tasks and Leaderboards
*
### Languages
Bornholmsk, a language variant of Danish spoken on the island of Bornholm. bcp47: 'da-bornholm'
## Dataset Structure
### Data Instances
13169 lines, 175 167 words, 801 KB
### Data Fields
'id': the sentence ID, 'int'
'text': the Bornholmsk text, 'string'
### Data Splits
Monolithic
## Dataset Creation
### Curation Rationale
To gather as much digital Bornholmsk together as possible
### Source Data
#### Initial Data Collection and Normalization
From many places - see paper for details. Sources include poems, songs, translations from Danish, folk stories, dictionary entries.
#### Who are the source language producers?
Native speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
No annotations
### Personal and Sensitive Information
Unknown, but low risk of presence, given the source material
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to capture Bornholmsk digitally and provide a way for NLP systems to interact with it, and perhaps even spark interest in dealing with the language.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
This collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen
### Licensing Information
Creative Commons Attribution 4.0
| [
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Leon Derczynski",
"### Dataset Summary\n\nThis corpus introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian. \n\nSammenfattnijng på borrijnholmst: Dæjnna artikkelijn introduserer natursprågsresurser å varktoi for borrijnholmst, ed språg a dær snakkes på ön Borrijnholm me rødder i danst å i nær familia me skånst.\n\nFor more details, see the paper Bornholmsk Natural Language Processing: Resources and Tools.",
"### Supported Tasks and Leaderboards\n\n*",
"### Languages\n\nBornholmsk, a language variant of Danish spoken on the island of Bornholm. bcp47: 'da-bornholm'",
"## Dataset Structure",
"### Data Instances\n\n13169 lines, 175 167 words, 801 KB",
"### Data Fields\n\n'id': the sentence ID, 'int'\n'text': the Bornholmsk text, 'string'",
"### Data Splits\n\nMonolithic",
"## Dataset Creation",
"### Curation Rationale\n\nTo gather as much digital Bornholmsk together as possible",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFrom many places - see paper for details. Sources include poems, songs, translations from Danish, folk stories, dictionary entries.",
"#### Who are the source language producers?\n\nNative speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.",
"### Annotations",
"#### Annotation process\n\nNo annotations",
"#### Who are the annotators?\n\nNo annotations",
"### Personal and Sensitive Information\n\nUnknown, but low risk of presence, given the source material",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to capture Bornholmsk digitally and provide a way for NLP systems to interact with it, and perhaps even spark interest in dealing with the language.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThis collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen",
"### Licensing Information\n\nCreative Commons Attribution 4.0"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Danish #license-cc-by-4.0 #region-us \n",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Leon Derczynski",
"### Dataset Summary\n\nThis corpus introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian. \n\nSammenfattnijng på borrijnholmst: Dæjnna artikkelijn introduserer natursprågsresurser å varktoi for borrijnholmst, ed språg a dær snakkes på ön Borrijnholm me rødder i danst å i nær familia me skånst.\n\nFor more details, see the paper Bornholmsk Natural Language Processing: Resources and Tools.",
"### Supported Tasks and Leaderboards\n\n*",
"### Languages\n\nBornholmsk, a language variant of Danish spoken on the island of Bornholm. bcp47: 'da-bornholm'",
"## Dataset Structure",
"### Data Instances\n\n13169 lines, 175 167 words, 801 KB",
"### Data Fields\n\n'id': the sentence ID, 'int'\n'text': the Bornholmsk text, 'string'",
"### Data Splits\n\nMonolithic",
"## Dataset Creation",
"### Curation Rationale\n\nTo gather as much digital Bornholmsk together as possible",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFrom many places - see paper for details. Sources include poems, songs, translations from Danish, folk stories, dictionary entries.",
"#### Who are the source language producers?\n\nNative speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.",
"### Annotations",
"#### Annotation process\n\nNo annotations",
"#### Who are the annotators?\n\nNo annotations",
"### Personal and Sensitive Information\n\nUnknown, but low risk of presence, given the source material",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to capture Bornholmsk digitally and provide a way for NLP systems to interact with it, and perhaps even spark interest in dealing with the language.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThis collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen",
"### Licensing Information\n\nCreative Commons Attribution 4.0"
] |
12ff587afc996106440872be6b3656218fad0e82 |
# Dataset Card for "twitter-pos-vcb"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html)
- **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter)
- **Paper:** [https://aclanthology.org/R13-1026.pdf](https://aclanthology.org/R13-1026.pdf)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 4.51 MiB
- **Size of the generated dataset:** 26.88 MB
- **Total amount of disk used:** 31.39 MB
### Dataset Summary
Part-of-speech tagging is a basic NLP task. However, Twitter text
is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This data is the vote-constrained bootstrapped data generated to support state-of-the-art results.
The data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
jointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs
are completely compatible over a whole tweet is that tweet added to the dataset.
This data is recommended for use as training data **only**, and not as evaluation data.
For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English, non-region-specific. `bcp47:en`
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### twitter_pos_vcb
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
```
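To make the token/tag alignment concrete, here is a hedged loading sketch; it assumes the Hub repository id `strombergnlp/twitter_pos_vcb`, a single `train` split, and that `pos_tags` is a sequence of `ClassLabel` values.
```python
from datasets import load_dataset

# Load the bootstrapped Twitter PoS data (repository id taken from this card; split assumed).
tweets = load_dataset("strombergnlp/twitter_pos_vcb", split="train")

example = tweets[0]

# Assuming pos_tags is a Sequence of ClassLabel, map each tag index back to its
# string so every token lines up with its part-of-speech tag.
tag_feature = tweets.features["pos_tags"].feature
for token, tag_id in zip(example["tokens"], example["pos_tags"]):
    print(f"{token}\t{tag_feature.int2str(tag_id)}")
```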
### Data Splits
| name |tokens|sentences|
|---------|----:|---------:|
|twitter-pos-vcb|1 543 126| 159 492|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons Attribution 4.0 (CC-BY)
### Citation Information
```
@inproceedings{derczynski2013twitter,
title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
pages={198--206},
year={2013}
}
```
### Contributions
Author uploaded ([@leondz](https://github.com/leondz)) | strombergnlp/twitter_pos_vcb | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-28T09:10:59+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["part-of-speech"], "paperswithcode_id": "twitter-pos-vcb", "pretty_name": "Twitter PoS VCB"} | 2022-10-25T20:42:56+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
| Dataset Card for "twitter-pos-vcb"
==================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Leon Derczynski
* Size of downloaded dataset files: 4.51 MiB
* Size of the generated dataset: 26.88 MB
* Total amount of disk used: 31.39 MB
### Dataset Summary
Part-of-speech information is basic NLP task. However, Twitter text
is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This data is the vote-constrained bootstrapped data generate to support state-of-the-art results.
The data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
jointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs
are completely compatible over a whole tweet, is that tweet added to the dataset.
This data is recommend for use a training data only, and not evaluation data.
For more details see URL and URL
### Supported Tasks and Leaderboards
### Languages
English, non-region-specific. 'bcp47:en'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### twitter\_pos\_vcb
* 'id': a 'string' feature.
* 'tokens': a 'list' of 'string' features.
* 'pos\_tags': a 'list' of classification labels ('int'). Full tagset with indices:
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
Creative Commons Attribution 4.0 (CC-BY)
### Contributions
Author uploaded (@leondz)
| [
"### Dataset Summary\n\n\nPart-of-speech information is basic NLP task. However, Twitter text\nis difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.\nThis data is the vote-constrained bootstrapped data generate to support state-of-the-art results.\n\n\nThe data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.\nThe tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged\njointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs\nare completely compatible over a whole tweet, is that tweet added to the dataset.\n\n\nThis data is recommend for use a training data only, and not evaluation data.\n\n\nFor more details see URL and URL",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish, non-region-specific. 'bcp47:en'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### twitter\\_pos\\_vcb\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels ('int'). Full tagset with indices:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 (CC-BY)",
"### Contributions\n\n\nAuthor uploaded (@leondz)"
] | [
"TAGS\n#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nPart-of-speech information is basic NLP task. However, Twitter text\nis difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.\nThis data is the vote-constrained bootstrapped data generate to support state-of-the-art results.\n\n\nThe data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.\nThe tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged\njointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs\nare completely compatible over a whole tweet, is that tweet added to the dataset.\n\n\nThis data is recommend for use a training data only, and not evaluation data.\n\n\nFor more details see URL and URL",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish, non-region-specific. 'bcp47:en'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### twitter\\_pos\\_vcb\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels ('int'). Full tagset with indices:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 (CC-BY)",
"### Contributions\n\n\nAuthor uploaded (@leondz)"
] |
9cd7629e8632e2bf2185a7ae2309b8333248d57e |
# Dataset Card for "zulu-stance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://arxiv.org/abs/2205.03153](https://arxiv.org/abs/2205.03153)
- **Repository:**
- **Paper:** [https://arxiv.org/pdf/2205.03153](https://arxiv.org/pdf/2205.03153)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30KiB
### Dataset Summary
This is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.
Our paper aims at utilizing this progress made for English to transfer that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve similar results for stance detection for the Zulu language, the target language in this work, as are found for English. A natively-translated dataset is used for evaluation of domain transfer.
### Supported Tasks and Leaderboards
*
### Languages
Zulu (`bcp47:zu`)
## Dataset Structure
### Data Instances
#### zulu_stance
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30KiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'ubukhulu be-islam buba sobala lapho i-smartphone ifaka i-ramayana njengo-ramadan. #semst',
'target': 'Atheism',
'stance': 1}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` expressing a stance.
- `target`: a `string` of the target/topic annotated here.
- `stance`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "FAVOR",
1: "AGAINST",
2: "NONE",
```
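A minimal usage sketch follows; it assumes the dataset is loaded from the Hugging Face Hub under this repository id and that `stance` is stored as a class label as described above:

```python
from datasets import load_dataset

# Load the training split and decode one stance-annotated example.
ds = load_dataset("strombergnlp/zulu_stance", split="train")

example = ds[0]
label_names = ds.features["stance"].names   # expected: ["FAVOR", "AGAINST", "NONE"]
print(example["text"])
print(example["target"], "->", label_names[example["stance"]])
```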
### Data Splits
| name |train|
|---------|----:|
|zulu_stance|1343 sentences|
## Dataset Creation
### Curation Rationale
To enable stance detection in Zulu and also to measure domain transfer in translation
### Source Data
#### Initial Data Collection and Normalization
The original data is taken from [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/),
and then translated manually to Zulu.
#### Who are the source language producers?
English-speaking Twitter users.
### Annotations
#### Annotation process
See [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/); the annotations are taken from there.
#### Who are the annotators?
See [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/); the annotations are taken from there.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
While the data is in Zulu, the source text is not from or about Zulu-speakers, and so still expresses the social biases and topics found in English-speaking Twitter users. Further, some of the topics are USA-specific. The sentiments and ideas in this dataset do not represent Zulu speakers.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{dlamini_zulu_stance,
title={Bridging the Domain Gap for Stance Detection for the Zulu language},
author={Dlamini, Gcinizwe and Bekkouch, Imad Eddine Ibrahim and Khan, Adil and Derczynski, Leon},
booktitle={Proceedings of IEEE IntelliSys},
year={2022}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| strombergnlp/zulu_stance | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zu",
"license:cc-by-4.0",
"stance-detection",
"arxiv:2205.03153",
"region:us"
] | 2022-04-28T09:24:38+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["zu"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking", "sentiment-classification"], "paperswithcode_id": "zulu-stance", "pretty_name": "ZUstance", "tags": ["stance-detection"]} | 2022-10-25T20:46:14+00:00 | [
"2205.03153"
] | [
"zu"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Zulu #license-cc-by-4.0 #stance-detection #arxiv-2205.03153 #region-us
| Dataset Card for "zulu-stance"
==============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper: URL
* Point of Contact: Leon Derczynski
* Size of downloaded dataset files: 212.54 KiB
* Size of the generated dataset: 186.76 KiB
* Total amount of disk used: 399.30KiB
### Dataset Summary
This is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.
Our paper aims at utilizing this progress made for English to transfer that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve similar results for stance detection for the Zulu language, the target language in this work, as are found for English. A natively-translated dataset is used for evaluation of domain transfer.
### Supported Tasks and Leaderboards
*
### Languages
Zulu ('bcp47:zu')
Dataset Structure
-----------------
### Data Instances
#### zulu\_stance
* Size of downloaded dataset files: 212.54 KiB
* Size of the generated dataset: 186.76 KiB
* Total amount of disk used: 399.30KiB
An example of 'train' looks as follows.
### Data Fields
* 'id': a 'string' feature.
* 'text': a 'string' expressing a stance.
* 'target': a 'string' of the target/topic annotated here.
* 'stance': a class label representing the stance the text expresses towards the target. Full tagset with indices:
### Data Splits
Dataset Creation
----------------
### Curation Rationale
To enable stance detection in Zulu and also to measure domain transfer in translation
### Source Data
#### Initial Data Collection and Normalization
The original data is taken from Semeval2016 task 6: Detecting stance in tweets.,
and then translated manually to Zulu.
#### Who are the source language producers?
English-speaking Twitter users.
### Annotations
#### Annotation process
See Semeval2016 task 6: Detecting stance in tweets.; the annotations are taken from there.
#### Who are the annotators?
See Semeval2016 task 6: Detecting stance in tweets.; the annotations are taken from there.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
While the data is in Zulu, the source text is not from or about Zulu-speakers, and so still expresses the social biases and topics found in English-speaking Twitter users. Further, some of the topics are USA-specific. The sentiments and ideas in this dataset do not represent Zulu speakers.
### Other Known Limitations
The above limitations apply.
Additional Information
----------------------
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Contributions
Author-added dataset @leondz
| [
"### Dataset Summary\n\n\nThis is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.\n\n\nOur paper aims at utilizing this progress made for English to transfers that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve similar results for stance detection for the Zulu language, the target language in this work, as are found for English. A natively-translated dataset is used for evaluation of domain transfer.",
"### Supported Tasks and Leaderboards\n\n\n*",
"### Languages\n\n\nZulu ('bcp47:zu')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### zulu\\_stance\n\n\n* Size of downloaded dataset files: 212.54 KiB\n* Size of the generated dataset: 186.76 KiB\n* Total amount of disk used: 399.30KiB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'text': a 'string' expressing a stance.\n* 'target': a 'string' of the target/topic annotated here.\n* 'stance': a class label representing the stance the text expresses towards the target. Full tagset with indices:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo enable stance detection in Zulu and also to measure domain transfer in translation",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe original data is taken from Semeval2016 task 6: Detecting stance in tweets.,\nand then translated manually to Zulu.",
"#### Who are the source language producers?\n\n\nEnglish-speaking Twitter users.",
"### Annotations",
"#### Annotation process\n\n\nSee Semeval2016 task 6: Detecting stance in tweets.; the annotations are taken from there.",
"#### Who are the annotators?\n\n\nSee Semeval2016 task 6: Detecting stance in tweets.; the annotations are taken from there.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. User names are preserved.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThere's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.",
"### Discussion of Biases\n\n\nWhile the data is in Zulu, the source text is not from or about Zulu-speakers, and so still expresses the social biases and topics found in English-speaking Twitter users. Further, some of the topics are USA-specific. The sentiments and ideas in this dataset do not represent Zulu speakers.",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Zulu #license-cc-by-4.0 #stance-detection #arxiv-2205.03153 #region-us \n",
"### Dataset Summary\n\n\nThis is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.\n\n\nOur paper aims at utilizing this progress made for English to transfers that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve similar results for stance detection for the Zulu language, the target language in this work, as are found for English. A natively-translated dataset is used for evaluation of domain transfer.",
"### Supported Tasks and Leaderboards\n\n\n*",
"### Languages\n\n\nZulu ('bcp47:zu')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### zulu\\_stance\n\n\n* Size of downloaded dataset files: 212.54 KiB\n* Size of the generated dataset: 186.76 KiB\n* Total amount of disk used: 399.30KiB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'text': a 'string' expressing a stance.\n* 'target': a 'string' of the target/topic annotated here.\n* 'stance': a class label representing the stance the text expresses towards the target. Full tagset with indices:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo enable stance detection in Zulu and also to measure domain transfer in translation",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe original data is taken from Semeval2016 task 6: Detecting stance in tweets.,\nand then translated manually to Zulu.",
"#### Who are the source language producers?\n\n\nEnglish-speaking Twitter users.",
"### Annotations",
"#### Annotation process\n\n\nSee Semeval2016 task 6: Detecting stance in tweets.; the annotations are taken from there.",
"#### Who are the annotators?\n\n\nSee Semeval2016 task 6: Detecting stance in tweets.; the annotations are taken from there.",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. User names are preserved.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThere's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.",
"### Discussion of Biases\n\n\nWhile the data is in Zulu, the source text is not from or about Zulu-speakers, and so still expresses the social biases and topics found in English-speaking Twitter users. Further, some of the topics are USA-specific. The sentiments and ideas in this dataset do not represent Zulu speakers.",
"### Other Known Limitations\n\n\nThe above limitations apply.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is curated by the paper's authors.",
"### Licensing Information\n\n\nThe authors distribute this data under Creative Commons attribution license, CC-BY 4.0.",
"### Contributions\n\n\nAuthor-added dataset @leondz"
] |
00e360ccbfa82f47287311387af1011f28f7e667 | You should know how to use it:)
Just in case, you can email me [mingzhe at nus.edu.sg] if you need any help. | Elfsong/clef_data | [
"region:us"
] | 2022-04-28T10:31:27+00:00 | {} | 2022-08-29T04:41:54+00:00 | [] | [] | TAGS
#region-us
| You should know how to use it:)
Just in case, you can email me [mingzhe at URL] if you need any help. | [] | [
"TAGS\n#region-us \n"
] |
3fc0666c45d46d03a9bcd43f5e887dda2727328e |
Adapted from https://github.com/nicholas-leonard/word2vec | tomasmcz/word2vec_analogy | [
"license:apache-2.0",
"region:us"
] | 2022-04-28T10:58:50+00:00 | {"license": "apache-2.0"} | 2022-04-28T13:07:56+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
Adapted from URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
a1f29ad15db2594f69420a2abc845cc0bd09a311 |
Possibly a placeholder dataset for the original here: https://huggingface.co/datasets/bigscience-catalogue-data/bias-shades
# Data Statement for SHADES
> **How to use this document:**
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: https://www.aclweb.org/anthology/Q18-1041/ .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "DATASTATEMENT.md". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Bias-Shades
Citation (if available): TODO.
Data set developer(s): This dataset was compiled by dozens of research scientists through the BigScience open science collaboration. Collaborators, representing numerous cultures and languages, joined the project of their own volition.
Data statement author(s): Shayne Longpre, Aurélie Névéol, Shanya Sharma[Add name here if you add/edit the data statement :)].
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
This dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted. An initial set of sentences was inferred from stereotypes expressed in the crowS-pairs data set(Nangia et al.). Native speakers first crafted templates for sentences expressing a stereotype. These templates are marked for gender and plurality of the target nouns, so the template can be reused by substituting different targets. Next, the template-target noun pair combinations were annotated for the veracity/reliability of the expressed stereotype. The resulting sentences express common and less common stereotypes in a variety of cultures and languages.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: en-US, fr-FR, hi-IN, es-DO, ar-LY, ru-RU, de-DE, nl-NL, ta-IN.
* Language variety description: English spoken by native speakers of the United States, native French people from metropolitan France, native Hindi and Tamil speakers from India, Spanish speakers from the Dominican Republic, Arabic speakers from Libya, Russian speakers from Russia, German speakers from Germany, and Dutch speakers from the Netherlands.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Listed below.
Speakers:
* [ADD YOURSELF!]
* Shayne Longpre: English-speaking, male, 28 years old, culturally Canadian.
* Aurélie Névéol: French (native), English and Spanish speaking, female, 44 years old, culturally French (also familiar with American culture)
* Shanya Sharma: Hindi(native), English speaking, female, 24 years old, culturally Indian
* Margaret Mitchell: English, female, mid-30s, U.S.A.
* Maraim Masoud: Arabic, English Speaking female.
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Speaker and annotator contributors listed in section C.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
Collected data is a collection of offensive stereotyped statements in numerous languages and cultures. They might be upsetting and/or offensive.
Along with these stereotyped statements are annotation judgements of how prevalent/real the expressed stereotypes are in the real world. Some statements were created from templates with substituted target nouns, and therefore may express an uncommon or unlikely stereotype.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
This initiative is part of the BigScience Workshop: https://bigscience.huggingface.co/.
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: [[email protected]](mailto:[email protected]). This document template is licensed as [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the [2020 LREC workshop on Data Statements](https://sites.google.com/uw.edu/data-statements-for-nlp/), by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski. | BigScienceBiasEval/bias-shades | [
"language:ar",
"language:en",
"language:fr",
"language:de",
"language:hi",
"language:ru",
"language:es",
"language:ta",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-04-28T15:46:11+00:00 | {"language": ["ar", "en", "fr", "de", "hi", "ru", "es", "ta"], "license": "cc-by-sa-4.0"} | 2024-01-14T11:45:29+00:00 | [] | [
"ar",
"en",
"fr",
"de",
"hi",
"ru",
"es",
"ta"
] | TAGS
#language-Arabic #language-English #language-French #language-German #language-Hindi #language-Russian #language-Spanish #language-Tamil #license-cc-by-sa-4.0 #region-us
|
Possibly a placeholder dataset for the original here: URL
# Data Statement for SHADES
> How to use this document:
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: URL .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "URL". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Bias-Shades
Citation (if available): TODO.
Data set developer(s): This dataset was compiled by dozens of research scientists through the BigScience open science collaboration. Collaborators, representing numerous cultures and languages, joined the project of their own volition.
Data statement author(s): Shayne Longpre, Aurélie Névéol, Shanya Sharma[Add name here if you add/edit the data statement :)].
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
This dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted. An initial set of sentences was inferred from stereotypes expressed in the crowS-pairs data set(Nangia et al.). Native speakers first crafted templates for sentences expressing a stereotype. These templates are marked for gender and plurality of the target nouns, so the template can be reused by substituting different targets. Next, the template-target noun pair combinations were annotated for the veracity/reliability of the expressed stereotype. The resulting sentences express common and less common stereotypes in a variety of cultures and languages.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: en-US, fr-FR, hi-IN, es-DO, ar-LY, ru-RU, de-DE, nl-NL, ta-IN.
* Language variety description: English spoken by native speakers of the United States, native French people from metropolitan France, native Hindi and Tamil speakers from India, Spanish speakers from the Dominican Republic, Arabic speakers from Libya, Russian speakers from Russia, German speakers from Germany, and Dutch speakers from the Netherlands.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Listed below.
Speakers:
* [ADD YOURSELF!]
* Shayne Longpre: English-speaking, male, 28 years old, culturally Canadian.
* Aurélie Névéol: French (native), English and Spanish speaking, female, 44 years old, culturally French (also familiar with American culture)
* Shanya Sharma: Hindi(native), English speaking, female, 24 years old, culturally Indian
* Margaret Mitchell: English, female, mid-30s, U.S.A.
* Maraim Masoud: Arabic, English Speaking female.
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Speaker and annotator contributors listed in section C.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
Collected data is a collection of offensive stereotyped statements in numerous languages and cultures. They might be upsetting and/or offensive.
Along with these stereotyped statements are annotation judgements of how prevalent/real the expressed stereotypes are in the real world. Some statements were created from templates with substituted target nouns, and therefore may express an uncommon or unlikely stereotype.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
This initiative is part of the BigScience Workshop: URL
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski. | [
"# Data Statement for SHADES\n\n> How to use this document:\n> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.\n\n> For full details, the best source is the original Data Statements paper, here: URL .\n\n> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as \"URL\". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.\n\n> Only blockquoted content should be deleted; the final about statement should be left intact.\n\nData set name: Bias-Shades\n\nCitation (if available): TODO.\n\nData set developer(s): This dataset was compiled by dozens of research scientists through the BigScience open science collaboration. Collaborators, representing numerous cultures and languages, joined the project of their own volition.\n\nData statement author(s): Shayne Longpre, Aurélie Névéol, Shanya Sharma[Add name here if you add/edit the data statement :)].\n\nOthers who contributed to this document: N/A\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## A. CURATION RATIONALE \n\n> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.\n\nThis dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted. An initial set of sentences was inferred from stereotypes expressed in the crowS-pairs data set(Nangia et al.). Native speakers first crafted templates for sentences expressing a stereotype. These templates are marked for gender and plurality of the target nouns, so the template can be reused by substituting different targets. Next, the template-target noun pair combinations were annotated for the veracity/reliability of the expressed stereotype. The resulting sentences express common and less common stereotypes in a variety of cultures and languages.",
"## B. LANGUAGE VARIETY/VARIETIES\n\n> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., \"English as spoken in Palo Alto, California\", or \"Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin\").\n\n* BCP-47 language tags: en-US, fr-FR, hi-IN, es-DO, ar-LY, ru-RU, de-DE, nl-NL, ta-IN.\n* Language variety description: English spoken by native speakers of the United States, native French people from metropolitan France, native Hindi and Tamil speakers from India, Spanish speakers from the Dominican Republic, Arabic speakers from Libya, Russian speakers from Russia, German speakers from Germany, and Dutch speakers from the Netherlands.",
"## C. CONTRIBUTOR DEMOGRAPHIC\n> ## C. SPEAKER DEMOGRAPHIC\n\n> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: \n\nParticipants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Listed below.\n\nSpeakers:\n* [ADD YOURSELF!]\n* Shayne Longpre: English-speaking, male, 28 years old, culturally Canadian.\n* Aurélie Névéol: French (native), English and Spanish speaking, female, 44 years old, culturally French (also familiar with American culture)\n* Shanya Sharma: Hindi(native), English speaking, female, 24 years old, culturally Indian\n* Margaret Mitchell: English, female, mid-30s, U.S.A.\n* Maraim Masoud: Arabic, English Speaking female.",
"## D. ANNOTATOR DEMOGRAPHIC\n\n> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:\n\nParticipants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Speaker and annotator contributors listed in section C.",
"## E. SPEECH SITUATION\n\nN/A",
"## F. TEXT CHARACTERISTICS\n\n> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.\n\nCollected data is a collection of offensive stereotyped statements in numerous languages and cultures. They might be upsetting and/or offensive.\n\nAlong with these stereotyped statements are annotation judgements of how prevalent/real the expressed stereotypes are in the real world. Some statements were created from templates with substituted target nouns, and therefore may express an uncommon or unlikely stereotype.",
"## G. RECORDING QUALITY\n\nN/A",
"## H. OTHER\n\n> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.",
"## I. PROVENANCE APPENDIX\n\nThis initiative is part of the BigScience Workshop: URL",
"## About this document\n\nA data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.\n\nData Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.\n\nThis version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski."
] | [
"TAGS\n#language-Arabic #language-English #language-French #language-German #language-Hindi #language-Russian #language-Spanish #language-Tamil #license-cc-by-sa-4.0 #region-us \n",
"# Data Statement for SHADES\n\n> How to use this document:\n> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.\n\n> For full details, the best source is the original Data Statements paper, here: URL .\n\n> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as \"URL\". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.\n\n> Only blockquoted content should be deleted; the final about statement should be left intact.\n\nData set name: Bias-Shades\n\nCitation (if available): TODO.\n\nData set developer(s): This dataset was compiled by dozens of research scientists through the BigScience open science collaboration. Collaborators, representing numerous cultures and languages, joined the project of their own volition.\n\nData statement author(s): Shayne Longpre, Aurélie Névéol, Shanya Sharma[Add name here if you add/edit the data statement :)].\n\nOthers who contributed to this document: N/A\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## A. CURATION RATIONALE \n\n> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.\n\nThis dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted. An initial set of sentences was inferred from stereotypes expressed in the crowS-pairs data set(Nangia et al.). Native speakers first crafted templates for sentences expressing a stereotype. These templates are marked for gender and plurality of the target nouns, so the template can be reused by substituting different targets. Next, the template-target noun pair combinations were annotated for the veracity/reliability of the expressed stereotype. The resulting sentences express common and less common stereotypes in a variety of cultures and languages.",
"## B. LANGUAGE VARIETY/VARIETIES\n\n> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., \"English as spoken in Palo Alto, California\", or \"Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin\").\n\n* BCP-47 language tags: en-US, fr-FR, hi-IN, es-DO, ar-LY, ru-RU, de-DE, nl-NL, ta-IN.\n* Language variety description: English spoken by native speakers of the United States, native French people from metropolitan France, native Hindi and Tamil speakers from India, Spanish speakers from the Dominican Republic, Arabic speakers from Libya, Russian speakers from Russia, German speakers from Germany, and Dutch speakers from the Netherlands.",
"## C. CONTRIBUTOR DEMOGRAPHIC\n> ## C. SPEAKER DEMOGRAPHIC\n\n> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: \n\nParticipants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Listed below.\n\nSpeakers:\n* [ADD YOURSELF!]\n* Shayne Longpre: English-speaking, male, 28 years old, culturally Canadian.\n* Aurélie Névéol: French (native), English and Spanish speaking, female, 44 years old, culturally French (also familiar with American culture)\n* Shanya Sharma: Hindi(native), English speaking, female, 24 years old, culturally Indian\n* Margaret Mitchell: English, female, mid-30s, U.S.A.\n* Maraim Masoud: Arabic, English Speaking female.",
"## D. ANNOTATOR DEMOGRAPHIC\n\n> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:\n\nParticipants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Speaker and annotator contributors listed in section C.",
"## E. SPEECH SITUATION\n\nN/A",
"## F. TEXT CHARACTERISTICS\n\n> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.\n\nCollected data is a collection of offensive stereotyped statements in numerous languages and cultures. They might be upsetting and/or offensive.\n\nAlong with these stereotyped statements are annotation judgements of how prevalent/real the expressed stereotypes are in the real world. Some statements were created from templates with substituted target nouns, and therefore may express an uncommon or unlikely stereotype.",
"## G. RECORDING QUALITY\n\nN/A",
"## H. OTHER\n\n> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.",
"## I. PROVENANCE APPENDIX\n\nThis initiative is part of the BigScience Workshop: URL",
"## About this document\n\nA data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.\n\nData Statements are from the University of Washington. Contact: datastatements@URL. This document template is licensed as CC0.\n\nThis version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the 2020 LREC workshop on Data Statements, by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski."
] |
f6eb365a04aacbd3b1055cf1cfea9edf1d6355f7 |
# Dataset Card for sci-definition
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/talaugust/definition-complexity
- **Repository:** https://github.com/talaugust/definition-complexity
- **Paper:** https://openreview.net/forum?id=S8x2x5EpWj
- **Point of Contact:** Tal August ([email protected])
### Dataset Summary
This is a dataset of scientific and medical definitions. They are drawn from two sources:
- [MedQuAD](https://github.com/abachaa/MedQuAD)
- [Wikipedia Science Glossaries](https://en.wikipedia.org/wiki/Category:Glossaries_of_science)
For both sources, all terms and definitions are formatted as "What is (are) X?" or "Do you have more information about X?" with the answer being the definition of X.
### Supported Tasks and Leaderboards
The task this dataset was created for was generating definitions of scientific terms. It has additionally been used to vary the complexity of the generated definitions. Both versions of this task are described in the original paper.
### Languages
English
## Dataset Structure
### Data Instances
{
"question": "What is (are) complement component 2 deficiency?",
"category": "medquad",
"first_sentence": "Complement component 2 deficiency is a disorder that causes the immune system to malfunction, resulting in a form of immunodeficiency.",
"support_doc_sparse_s2orc":"The occurrence of membranoproliferative glomerulonephritis in a 13 year old boy with inherited complete deficiency of the second component of complement (C2) is described here....<P>...."
}
### Data Fields
- `question`: the question containing the term.
- `category`: either `medquad` or `wikipedia`.
- `first_sentence`: the answer (or definition).
- `support_doc_sparse_s2orc`: the support document, made up of 10 journal abstracts related to the term. Journal abstracts are drawn from S2ORC (https://github.com/allenai/s2orc) using ElasticSearch.
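A minimal loading sketch (the repository id and split name are assumptions based on where this card is hosted; adjust if the data is distributed differently):

```python
from datasets import load_dataset

# Load one split and look at the fields described above.
ds = load_dataset("talaugust/sci-definition", split="train")

row = ds[0]
print(row["question"])                        # e.g. "What is (are) X?"
print(row["first_sentence"])                  # the expert-written definition
print(row["support_doc_sparse_s2orc"][:200])  # start of the ~10 concatenated S2ORC abstracts
```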
### Data Splits
The data is split into train, dev, and test sets. As a note, the original results reported in the paper on generating definitions were on the dev set, and training was done on a split of the train set. The original test set was reserved for evaluating complexity control.
## Dataset Creation
### Curation Rationale
We collected from these two sources because they contain expert-written and factually correct information about scientific and medical terms. For the MedQuAD data, we filter for questions of the form: What is (are) X? or Do you have more information on X? We then format the Wikipedia science glossaries in the same form ("What is (are) X?).
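The question-form filter can be sketched as a simple surface-pattern check; the regexes below are illustrative assumptions, not the authors' exact filter:

```python
import re

# Patterns for the two MedQuAD question forms mentioned above (illustrative only).
DEFINITION_PATTERNS = [
    re.compile(r"^What is \(are\) .+\?$"),
    re.compile(r"^Do you have more information (?:about|on) .+\?$"),
]

def is_definition_question(question: str) -> bool:
    return any(p.match(question.strip()) for p in DEFINITION_PATTERNS)

assert is_definition_question("What is (are) complement component 2 deficiency?")
```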
### Source Data
#### Initial Data Collection and Normalization
The data was collected by filtering the original MedQuAD dataset and drawing on Wikipedia science glossaries. This occurred between September 2020 and March 2021.
#### Who are the source language producers?
NIH website (MedQuAD), and Wikipedia editors
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
The goal of this dataset is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information.
### Discussion of Biases
The texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Please refer to the [MedQuAD](https://github.com/abachaa/MedQuAD) license
### Citation Information
@inproceedings{august-2022-definition-complexity,
title={Generating Scientific Definitions with Controllable Complexity},
author={Tal August, Katharina Reinecke, and Noah A. Smith},
booktitle={ACL},
year={2022}
} | talaugust/sci-definition | [
"region:us"
] | 2022-04-28T17:46:51+00:00 | {} | 2022-05-17T21:10:49+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for sci-definition
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Point of Contact: Tal August (taugust@URL)
### Dataset Summary
This is a dataset of scientific and medical definitions. They are drawn from two sources:
(MedQuAD)[URL
(Wikipedia Science Glossaries)[URL
For both sources, all terms and definitions are formatted as "What is (are) X?" or "Do you have more information about X?" with the answer being the definition of X.
### Supported Tasks and Leaderboards
The task this dataset was created for was generating definitions of scientific terms. It has additionally be used to vary the complexity of generating definitions. Both versions of this task are described in the original paper.
### Languages
English
## Dataset Structure
### Data Instances
{
"question": "What is (are) complement component 2 deficiency?",
"category": "medquad",
"first_sentence": "Complement component 2 deficiency is a disorder that causes the immune system to malfunction, resulting in a form of immunodeficiency.",
"support_doc_sparse_s2orc":"The occurrence of membranoproliferative glomerulonephritis in a 13 year old boy with inherited complete deficiency of the second component of complement (C2) is described here....<P>...."
}
### Data Fields
question: The question containing the term
category: either medquad or wikipedia
first_sentence: The answer (or definition)
support_doc_sparse_s2orc: the support document, made up of 10 journal abstracts related to the term. Journal abstracts are drawn from S2ORC (URL using ElasticSearch.
### Data Splits
Splits are in a test, train, and dev. As a note, the original results reported in the paper on generating definitions was on the dev set and training was done on a split of the train set. The original test set was reserved for evaluating complexity control.
## Dataset Creation
### Curation Rationale
We collected from these two sources because they contain expert-written and factually correct information about scientific and medical terms. For the MedQuAD data, we filter for questions of the form: What is (are) X? or Do you have more information on X? We then format the Wikipedia science glossaries in the same form ("What is (are) X?).
### Source Data
#### Initial Data Collection and Normalization
The data was collected by filtering the original MedQuAD dataset and drawing on Wikpiedia science glossaries. This occurred between September 2020-March 2021.
#### Who are the source language producers?
NIH website (MedQuAD), and Wikpedia editors
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
The goal of this dataset is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information.
### Discussion of Biases
The texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve.
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Please refer to the (MedQuAD)[URL license
@inproceedings{august-2022-definition-complexity,
title={Generating Scientific Definitions with Controllable Complexity},
author={Tal August, Katharina Reinecke, and Noah A. Smith},
booktitle={ACL},
year={2022}
} | [
"# Dataset Card for sci-definition",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n\n- Point of Contact: Tal August (taugust@URL)",
"### Dataset Summary\n\nThis is a dataset of scientific and medical definitions. They are drawn from two sources: \n\n(MedQuAD)[URL\n(Wikipedia Science Glossaries)[URL\n\nFor both sources, all terms and definitions are formatted as \"What is (are) X?\" or \"Do you have more information about X?\" with the answer being the definition of X.",
"### Supported Tasks and Leaderboards\n\nThe task this dataset was created for was generating definitions of scientific terms. It has additionally be used to vary the complexity of generating definitions. Both versions of this task are described in the original paper.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\n {\n \"question\": \"What is (are) complement component 2 deficiency?\",\n \"category\": \"medquad\",\n \"first_sentence\": \"Complement component 2 deficiency is a disorder that causes the immune system to malfunction, resulting in a form of immunodeficiency.\",\n \"support_doc_sparse_s2orc\":\"The occurrence of membranoproliferative glomerulonephritis in a 13 year old boy with inherited complete deficiency of the second component of complement (C2) is described here....<P>....\"\n }",
"### Data Fields\n\nquestion: The question containing the term\n\ncategory: either medquad or wikipedia\n\nfirst_sentence: The answer (or definition)\n\nsupport_doc_sparse_s2orc: the support document, made up of 10 journal abstracts related to the term. Journal abstracts are drawn from S2ORC (URL using ElasticSearch.",
"### Data Splits\n\nSplits are in a test, train, and dev. As a note, the original results reported in the paper on generating definitions was on the dev set and training was done on a split of the train set. The original test set was reserved for evaluating complexity control.",
"## Dataset Creation",
"### Curation Rationale\n\nWe collected from these two sources because they contain expert-written and factually correct information about scientific and medical terms. For the MedQuAD data, we filter for questions of the form: \u001cWhat is (are) X?\u001d or \u001cDo you have more information on X?\u001d We then format the Wikipedia science glossaries in the same form (\"What is (are) X?\u001d).",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe data was collected by filtering the original MedQuAD dataset and drawing on Wikpiedia science glossaries. This occurred between September 2020-March 2021.",
"#### Who are the source language producers?\n\nNIH website (MedQuAD), and Wikpedia editors",
"### Annotations",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nN/A",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe goal of this dataset is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information.",
"### Discussion of Biases\n\nThe texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPlease refer to the (MedQuAD)[URL license\n\n\n\n @inproceedings{august-2022-definition-complexity,\n title={Generating Scientific Definitions with Controllable Complexity},\n author={Tal August, Katharina Reinecke, and Noah A. Smith},\n booktitle={ACL},\n year={2022}\n }"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for sci-definition",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n\n- Point of Contact: Tal August (taugust@URL)",
"### Dataset Summary\n\nThis is a dataset of scientific and medical definitions. They are drawn from two sources: \n\n(MedQuAD)[URL\n(Wikipedia Science Glossaries)[URL\n\nFor both sources, all terms and definitions are formatted as \"What is (are) X?\" or \"Do you have more information about X?\" with the answer being the definition of X.",
"### Supported Tasks and Leaderboards\n\nThe task this dataset was created for was generating definitions of scientific terms. It has additionally be used to vary the complexity of generating definitions. Both versions of this task are described in the original paper.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\n {\n \"question\": \"What is (are) complement component 2 deficiency?\",\n \"category\": \"medquad\",\n \"first_sentence\": \"Complement component 2 deficiency is a disorder that causes the immune system to malfunction, resulting in a form of immunodeficiency.\",\n \"support_doc_sparse_s2orc\":\"The occurrence of membranoproliferative glomerulonephritis in a 13 year old boy with inherited complete deficiency of the second component of complement (C2) is described here....<P>....\"\n }",
"### Data Fields\n\nquestion: The question containing the term\n\ncategory: either medquad or wikipedia\n\nfirst_sentence: The answer (or definition)\n\nsupport_doc_sparse_s2orc: the support document, made up of 10 journal abstracts related to the term. Journal abstracts are drawn from S2ORC (URL using ElasticSearch.",
"### Data Splits\n\nSplits are in a test, train, and dev. As a note, the original results reported in the paper on generating definitions was on the dev set and training was done on a split of the train set. The original test set was reserved for evaluating complexity control.",
"## Dataset Creation",
"### Curation Rationale\n\nWe collected from these two sources because they contain expert-written and factually correct information about scientific and medical terms. For the MedQuAD data, we filter for questions of the form: \u001cWhat is (are) X?\u001d or \u001cDo you have more information on X?\u001d We then format the Wikipedia science glossaries in the same form (\"What is (are) X?\u001d).",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe data was collected by filtering the original MedQuAD dataset and drawing on Wikpiedia science glossaries. This occurred between September 2020-March 2021.",
"#### Who are the source language producers?\n\nNIH website (MedQuAD), and Wikpedia editors",
"### Annotations",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nN/A",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe goal of this dataset is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information.",
"### Discussion of Biases\n\nThe texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPlease refer to the (MedQuAD)[URL license\n\n\n\n @inproceedings{august-2022-definition-complexity,\n title={Generating Scientific Definitions with Controllable Complexity},\n author={Tal August, Katharina Reinecke, and Noah A. Smith},\n booktitle={ACL},\n year={2022}\n }"
] |
96349b9bd7536aac67f892f5cf36197cd49ea722 | # Dataset Card for FactRuEval-2016
## Dataset Description
- **Point of Contact:** [Guskov Sergey](https://gusevski.com)
### Dataset Summary
Evaluation of [Named Entity Recognition](https://www.dialog-21.ru/media/3430/starostinaetal.pdf) and Fact Extraction Systems for Russian.
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
- `token-classification`: The dataset can be used to train a model for [NER], which consists in [Token Classification]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
### Languages
RU.
## Dataset Structure
### Data Instances
Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
'data': [{'id':'', 'tokens':[], 'ner_tags':[]},...],
...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `id`: order id
- `tokens`: list of tokens
- `ner_tags`: list of NER tags
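A minimal sketch of consuming these fields for token classification, assuming the dataset can be loaded from the Hub under the `gusevski/factrueval2016` id and that `ner_tags` is aligned one-to-one with `tokens` (the id, split name, and tag scheme are assumptions):

```python
from datasets import load_dataset

# Assumptions: the repository id and split name below are correct and
# each token in "tokens" is paired with one tag in "ner_tags".
ds = load_dataset("gusevski/factrueval2016", split="train")

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag}")  # token and its NER tag (e.g. a BIO-style label)
```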
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
## Additional Information
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
MIT
| gusevski/factrueval2016 | [
"arxiv:2005.00614",
"region:us"
] | 2022-04-29T05:41:12+00:00 | {} | 2022-04-29T19:34:48+00:00 | [
"2005.00614"
] | [] | TAGS
#arxiv-2005.00614 #region-us
| Dataset Card for FactRuEval-2016
================================
Dataset Description
-------------------
* Point of Contact: Guskov Sergey
### Dataset Summary
Evaluation of Named Entity Recognition and Fact Extraction Systems for Russian.
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the 'task-category-tag' with an appropriate 'other:other-task-name').
* 'token-classification': The dataset can be used to train a model for [NER], which consists in [Token Classification]. Success on this task is typically measured by achieving a *high/low* metric name. The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
### Languages
RU.
Dataset Structure
-----------------
### Data Instances
Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
* 'id': order id
* 'tokens': list of tokens
* 'ner\_tags': list of ner tags
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
Dataset Creation
----------------
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
Additional Information
----------------------
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
MIT
| [
"### Dataset Summary\n\n\nEvaluation of Named Entity Recognition and Fact Extraction Systems for Russian.",
"### Supported Tasks and Leaderboards\n\n\nFor each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the 'task-category-tag' with an appropriate 'other:other-task-name').\n\n\n* 'token-classification': The dataset can be used to train a model for [NER], which consists in [Token Classification]. Success on this task is typically measured by achieving a *high/low* metric name. The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.",
"### Languages\n\n\nRU.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nProvide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.\n\n\nProvide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.",
"### Data Fields\n\n\nList and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.\n\n\n* 'id': order id\n* 'tokens': list of tokens\n* 'ner\\_tags': list of ner tags",
"### Data Splits\n\n\nDescribe and name the splits in the dataset if there are more than one.\n\n\nDescribe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.\n\n\nProvide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nWhat need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?",
"### Source Data\n\n\nThis section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)",
"#### Initial Data Collection and Normalization\n\n\nDescribe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.\n\n\nIf data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.\n\n\nIf the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.",
"#### Who are the source language producers?\n\n\nState whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.\n\n\nIf available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.\n\n\nDescribe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.\n\n\nDescribe other people represented or mentioned in the data. Where possible, link to references for the information.",
"### Annotations\n\n\nIf the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.",
"#### Annotation process\n\n\nIf applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.",
"#### Who are the annotators?\n\n\nIf annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.\n\n\nDescribe the people or systems who originally created the annotations and their selection criteria if applicable.\n\n\nIf available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.\n\n\nDescribe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.",
"### Personal and Sensitive Information\n\n\nState whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).\n\n\nState whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).\n\n\nIf efforts were made to anonymize the data, describe the anonymization process.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nPlease discuss some of the ways you believe the use of this dataset will impact society.\n\n\nThe statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.\n\n\nAlso describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.",
"### Discussion of Biases\n\n\nProvide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.\n\n\nFor Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.\n\n\nIf analyses have been run quantifying these biases, please add brief summaries and links to the studies here.",
"### Other Known Limitations\n\n\nIf studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nList the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.",
"### Licensing Information\n\n\nMIT"
] | [
"TAGS\n#arxiv-2005.00614 #region-us \n",
"### Dataset Summary\n\n\nEvaluation of Named Entity Recognition and Fact Extraction Systems for Russian.",
"### Supported Tasks and Leaderboards\n\n\nFor each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the 'task-category-tag' with an appropriate 'other:other-task-name').\n\n\n* 'token-classification': The dataset can be used to train a model for [NER], which consists in [Token Classification]. Success on this task is typically measured by achieving a *high/low* metric name. The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.",
"### Languages\n\n\nRU.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nProvide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.\n\n\nProvide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.",
"### Data Fields\n\n\nList and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.\n\n\n* 'id': order id\n* 'tokens': list of tokens\n* 'ner\\_tags': list of ner tags",
"### Data Splits\n\n\nDescribe and name the splits in the dataset if there are more than one.\n\n\nDescribe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.\n\n\nProvide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nWhat need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?",
"### Source Data\n\n\nThis section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)",
"#### Initial Data Collection and Normalization\n\n\nDescribe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.\n\n\nIf data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.\n\n\nIf the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.",
"#### Who are the source language producers?\n\n\nState whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.\n\n\nIf available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.\n\n\nDescribe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.\n\n\nDescribe other people represented or mentioned in the data. Where possible, link to references for the information.",
"### Annotations\n\n\nIf the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.",
"#### Annotation process\n\n\nIf applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.",
"#### Who are the annotators?\n\n\nIf annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.\n\n\nDescribe the people or systems who originally created the annotations and their selection criteria if applicable.\n\n\nIf available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.\n\n\nDescribe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.",
"### Personal and Sensitive Information\n\n\nState whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).\n\n\nState whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).\n\n\nIf efforts were made to anonymize the data, describe the anonymization process.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nPlease discuss some of the ways you believe the use of this dataset will impact society.\n\n\nThe statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.\n\n\nAlso describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.",
"### Discussion of Biases\n\n\nProvide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.\n\n\nFor Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.\n\n\nIf analyses have been run quantifying these biases, please add brief summaries and links to the studies here.",
"### Other Known Limitations\n\n\nIf studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nList the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.",
"### Licensing Information\n\n\nMIT"
] |
30d1fddbdb897932513e5974736b46c6fe6b6ceb | # AutoTrain Dataset for project: procell-expert
## Dataset Description
This dataset has been automatically processed by AutoTrain for project procell-expert.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "We studied the antitumor activity and toxicity of ZD1694 (tomudex), a specific inhibitor of thymidyl[...]",
"target": 0
},
{
"text": "Here we provide data that human prostate cancer cell lines express the platelet-type isoform of 12-L[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['accept', 'reject'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 155 |
| valid | 40 |
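A hedged sketch of loading the splits above and mapping the integer `target` back to its class name via the `ClassLabel` feature shown earlier (loading the AutoTrain data repository directly by id is an assumption):

```python
from datasets import load_dataset

# Assumption: the AutoTrain data repository loads directly by id with
# "train" and "valid" splits as listed in the table above.
ds = load_dataset("Mim/autotrain-data-procell-expert")

label_feature = ds["train"].features["target"]   # ClassLabel(names=['accept', 'reject'])
example = ds["train"][0]
print(example["text"][:80])                      # abstract excerpt
print(label_feature.int2str(example["target"]))  # "accept" or "reject"
```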
| Mim/autotrain-data-procell-expert | [
"task_categories:text-classification",
"region:us"
] | 2022-04-29T07:23:02+00:00 | {"task_categories": ["text-classification"]} | 2022-04-29T07:27:03+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: procell-expert
=============================================
Dataset Descritpion
-------------------
This dataset has been automatically processed by AutoTrain for project procell-expert.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
474fa5e109d3675a21add2fc4cc0f201159545ec |
# WikiHow Entries
Contains wikiHow questions, answers, and summaries in `.json` format.
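A hedged sketch of reading the entries with the `datasets` JSON loader; the file name and the field names (`question`, `answer`, `summary`) are assumptions based on the one-line description above:

```python
from datasets import load_dataset

# Assumptions: the repository ships a JSON file named "wikihow_small.json"
# whose records carry "question", "answer", and "summary" fields.
ds = load_dataset("json", data_files="wikihow_small.json", split="train")

entry = ds[0]
print(entry.get("question"))
print(entry.get("summary"))
```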
"language:en",
"license:mit",
"region:us"
] | 2022-04-29T09:17:35+00:00 | {"language": "en", "license": "mit"} | 2022-04-29T09:24:34+00:00 | [] | [
"en"
] | TAGS
#language-English #license-mit #region-us
|
# WikiHow Entries
Contains wikiHow question, answer and summary in '.json' format. | [
"# WikiHow Entries\n\nContains wikiHow question, answer and summary in '.json' format."
] | [
"TAGS\n#language-English #license-mit #region-us \n",
"# WikiHow Entries\n\nContains wikiHow question, answer and summary in '.json' format."
] |
0653eeaccbe5f0a9738220e0b1615f791da248b1 |
# Dataset Card for WMT21 Metrics Task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WMT21 Metrics Shared Task](https://www.statmt.org/wmt21/metrics-task.html)
- **Repository:** [MT Metrics Eval Github Repository](https://github.com/google-research/mt-metrics-eval)
- **Paper:** [Paper](https://aclanthology.org/2021.wmt-1.73/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset comprises twenty language pairs:
- Bengali-Hindi (`bn-hi`)
- Czech-English (`cs-en`)
- German-English (`de-en`)
- German-French (`de-fr`)
- English-Czech (`en-cs`)
- English-German (`en-de`)
- English-Hausa (`en-ha`)
- English-Icelandic (`en-is`)
- English-Japanese (`en-ja`)
- English-Russian (`en-ru`)
- English-Chinese (`en-zh`)
- French-German (`fr-de`)
- Hausa-English (`ha-en`)
- Hindi-Bengali (`hi-bn`)
- Icelandic-English (`is-en`)
- Japanese-English (`ja-en`)
- Russian-English (`ru-en`)
- Xhosa-Zulu (`xh-zu`)
- Chinese-English (`zh-en`)
- Zulu-Xhosa (`zu-xh`)
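A hedged sketch of selecting one of the language pairs listed above as a configuration; whether the Hub repository (`muibk/wmt21_metrics_task`) exposes the pairs as config names, and what fields each example carries, are assumptions:

```python
from datasets import load_dataset

# Assumption: each language pair listed above maps to a configuration name.
ds = load_dataset("muibk/wmt21_metrics_task", "en-de")
print(ds)  # inspect the available splits and fields before relying on them
```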
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| muibk/wmt21_metrics_task | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"license:unknown",
"region:us"
] | 2022-04-29T09:52:47+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found", "machine-generated", "expert-generated"], "language": ["bn-hi", "cs-en", "de-en", "de-fr", "en-cs", "en-de", "en-ha", "en-is", "en-ja", "en-ru", "en-zh", "fr-de", "ha-en", "hi-bn", "is-en", "ja-en", "ru-en", "xh-zh", "zh-en", "zu-xh"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["translation"], "task_ids": [], "pretty_name": "WMT21 Metrics Shared Task"} | 2022-07-12T12:13:25+00:00 | [] | [
"bn-hi",
"cs-en",
"de-en",
"de-fr",
"en-cs",
"en-de",
"en-ha",
"en-is",
"en-ja",
"en-ru",
"en-zh",
"fr-de",
"ha-en",
"hi-bn",
"is-en",
"ja-en",
"ru-en",
"xh-zh",
"zh-en",
"zu-xh"
] | TAGS
#task_categories-translation #annotations_creators-expert-generated #language_creators-found #language_creators-machine-generated #language_creators-expert-generated #multilinguality-translation #size_categories-100K<n<1M #license-unknown #region-us
|
# Dataset Card for WMT21 Metrics Task
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: WMT21 Metrics Shared Task
- Repository: MT Metrics Eval Github Repository
- Paper: Paper
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
The dataset comprises twenty language pairs:
- Bengali-Hindi ('bn-hi')
- Czech-English ('cs-en')
- German-English ('de-en')
- German-French ('de-fr')
- English-Czech ('en-cs')
- English-German ('en-de')
- English-Hausa ('en-ha')
- English-Icelandic ('en-is')
- English-Japanese ('en-ja')
- English-Russian ('en-ru')
- English-Chinese ('en-zh')
- French-German ('fr-de')
- Hausa-English ('ha-en')
- Hindi-Bengali ('hi-bn')
- Icelandic-English ('is-en')
- Japenese-English ('ja-en')
- Russian-English ('ru-en')
- Xhosa-Zulu ('xh-zu')
- Chinese-English ('zh-en')
- Zulu-Xhosa ('zu-xh')
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for WMT21 Metrics Task",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: WMT21 Metrics Shared Task\n- Repository: MT Metrics Eval Github Repository\n- Paper: Paper",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe dataset comprises twenty language pairs:\n- Bengali-Hindi ('bn-hi')\n- Czech-English ('cs-en')\n- German-English ('de-en')\n- German-French ('de-fr')\n- English-Czech ('en-cs')\n- English-German ('en-de')\n- English-Hausa ('en-ha')\n- English-Icelandic ('en-is')\n- English-Japanese ('en-ja')\n- English-Russian ('en-ru')\n- English-Chinese ('en-zh')\n- French-German ('fr-de')\n- Hausa-English ('ha-en')\n- Hindi-Bengali ('hi-bn')\n- Icelandic-English ('is-en')\n- Japenese-English ('ja-en')\n- Russian-English ('ru-en')\n- Xhosa-Zulu ('xh-zu')\n- Chinese-English ('zh-en')\n- Zulu-Xhosa ('zu-xh')",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-found #language_creators-machine-generated #language_creators-expert-generated #multilinguality-translation #size_categories-100K<n<1M #license-unknown #region-us \n",
"# Dataset Card for WMT21 Metrics Task",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: WMT21 Metrics Shared Task\n- Repository: MT Metrics Eval Github Repository\n- Paper: Paper",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe dataset comprises twenty language pairs:\n- Bengali-Hindi ('bn-hi')\n- Czech-English ('cs-en')\n- German-English ('de-en')\n- German-French ('de-fr')\n- English-Czech ('en-cs')\n- English-German ('en-de')\n- English-Hausa ('en-ha')\n- English-Icelandic ('en-is')\n- English-Japanese ('en-ja')\n- English-Russian ('en-ru')\n- English-Chinese ('en-zh')\n- French-German ('fr-de')\n- Hausa-English ('ha-en')\n- Hindi-Bengali ('hi-bn')\n- Icelandic-English ('is-en')\n- Japenese-English ('ja-en')\n- Russian-English ('ru-en')\n- Xhosa-Zulu ('xh-zu')\n- Chinese-English ('zh-en')\n- Zulu-Xhosa ('zu-xh')",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
3d793d9bfee3ee10313e3fa736c80d310c3a7209 |
# Dataset Card for Common Voice Corpus 9.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 20217 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 14973 validated hours in 93 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
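
For illustration, the access pattern recommended in the `audio` field description above can be sketched as follows. The `"et"` configuration and the `train` split are illustrative choices (the dataset is gated, so an authentication token is required, as in the preprocessing snippet further below):

```python
from datasets import load_dataset

# Illustrative: load the Estonian subset used in the example instance above.
# Common Voice 9.0 is gated, so authentication is required.
cv = load_dataset("mozilla-foundation/common_voice_9_0", "et", split="train", use_auth_token=True)

# Index the row first, then access the "audio" column, so that only this one
# MP3 file is decoded and resampled.
sample = cv[0]
waveform = sample["audio"]["array"]               # decoded float32 waveform
sampling_rate = sample["audio"]["sampling_rate"]  # 48000 in the example above
print(sample["sentence"], waveform.shape, sampling_rate)
```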
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_9_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| mozilla-foundation/common_voice_9_0 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2022-04-29T15:49:21+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["n<1K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["100K<n<1M"], "bg": ["1K<n<10K"], "bn": ["100K<n<1M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["10K<n<100K"], "cnh": ["1K<n<10K"], "cs": ["10K<n<100K"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["1K<n<10K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["100K<n<1M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["10K<n<100K"], "ga-IE": ["1K<n<10K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["1K<n<10K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "it": ["100K<n<1M"], "ja": ["10K<n<100K"], "ka": ["1K<n<10K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lt": ["10K<n<100K"], "lv": ["1K<n<10K"], "mdf": ["n<1K"], "mhr": ["10K<n<100K"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["1K<n<10K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sk": ["10K<n<100K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "tig": ["n<1K"], "tok": ["1K<n<10K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["10K<n<100K"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yue": ["10K<n<100K"], "zh-CN": ["10K<n<100K"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 9.0", "language_bcp47": ["ab", "ar", "as", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "it", "ja", "ka", "kab", "kk", "kmr", "ky", "lg", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mt", "myv", "nan-tw", "nl", "nn-NO", "or", "pa-IN", "pl", "pt", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sk", "sl", "sr", "sv-SE", "sw", "ta", "th", "tig", "tok", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "vot", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."} | 2023-07-29T15:00:12+00:00 | [
"1912.06670"
] | [] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us
|
# Dataset Card for Common Voice Corpus 9.0
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: Anton Lozhkov
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 20217 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 14973 validated hours in 93 languages, but more voices and languages are always added.
Take a look at the Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
Speech Bench
### Languages
## Dataset Structure
### Data Instances
A typical data point comprises the 'path' to the audio file and its 'sentence'.
Additional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.
### Data Fields
'client_id' ('string'): An id for which client (voice) made the recording
'path' ('string'): The path to the audio file
'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
'sentence' ('string'): The sentence the user was prompted to speak
'up_votes' ('int64'): How many upvotes the audio file has received from reviewers
'down_votes' ('int64'): How many downvotes the audio file has received from reviewers
'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')
'gender' ('string'): The gender of the speaker
'accent' ('string'): Accent of the speaker
'locale' ('string'): The locale of the speaker
'segment' ('string'): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Public Domain, CC-0
| [
"# Dataset Card for Common Voice Corpus 9.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Anton Lozhkov",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 20217 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 14973 validated hours in 93 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Speech Bench",
"### Languages",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n",
"# Dataset Card for Common Voice Corpus 9.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Anton Lozhkov",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 20217 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 14973 validated hours in 93 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Speech Bench",
"### Languages",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] |
8a8252c8c8e8c9a8f7ecaeb27bae1ac8b2313ab0 | Dataset containing city, country, region, and continents alongside their longitude and latitude co-ordinates. Cartesian coordinates are provided in x, y, z features. | jamescalam/world-cities-geo | [
"region:us"
] | 2022-04-29T15:54:48+00:00 | {} | 2022-04-29T17:34:46+00:00 | [] | [] | TAGS
#region-us
| Dataset containing city, country, region, and continents alongside their longitude and latitude co-ordinates. Cartesian coordinates are provided in x, y, z features. | [] | [
"TAGS\n#region-us \n"
] |
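
As a hedged illustration of the relationship between the latitude/longitude columns and the Cartesian x, y, z features mentioned in the world-cities-geo description above (the dataset's actual radius scaling and column names are not specified here, so a unit sphere is assumed):

```python
import math

def latlon_to_cartesian(latitude_deg: float, longitude_deg: float, radius: float = 1.0):
    """Project a latitude/longitude pair onto x, y, z coordinates on a sphere
    of the given radius (unit sphere by default)."""
    lat = math.radians(latitude_deg)
    lon = math.radians(longitude_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

# Example: approximate coordinates of Berlin.
print(latlon_to_cartesian(52.52, 13.405))
```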
03f4a9bf2ec961792476aee76a9fa8722ff8dc1e |
# VQGAN Pairs
This dataset contains ~2.4 million image pairs intended for improvement of image quality in VQGAN predictions. Each pair consists of:
- A 512x512 crop of an image taken from Open Images.
- A 256x256 image encoded and decoded using VQGAN, corresponding to the same image crop as the original.
This is the VQGAN implementation that was used for encoding and decoding: https://github.com/patil-suraj/vqgan-jax
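
A minimal sketch of how one pair might be consumed for an image-enhancement objective is given below. The file names, directory layout, and the choice of a mean-absolute-error comparison are assumptions for illustration only, not part of the dataset specification:

```python
import numpy as np
from PIL import Image

# Hypothetical paths to one pair: the 512x512 original crop and the
# 256x256 VQGAN encode/decode result of the same crop.
original = Image.open("pair_00000_original_512.png").convert("RGB")
decoded = Image.open("pair_00000_vqgan_256.png").convert("RGB")

# Upsample the decoded image to the original resolution (default PIL filter)
# so the two can be compared pixel-by-pixel, as an enhancement model would be.
decoded_up = decoded.resize(original.size)

a = np.asarray(original, dtype=np.float32) / 255.0
b = np.asarray(decoded_up, dtype=np.float32) / 255.0
print("mean absolute pixel difference:", float(np.abs(a - b).mean()))
```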
# License
This dataset is created using Open Images, which has the following license:
The annotations are licensed by Google LLC under CC BY 4.0 license. The images are listed as having a CC BY 2.0 license. Note: while we tried to identify images that are licensed under a Creative Commons Attribution license, we make no representations or warranties regarding the license status of each image and you should verify the license for each image yourself. | dalle-mini/vqgan-pairs | [
"task_categories:other",
"source_datasets:Open Images",
"license:cc-by-4.0",
"license:cc-by-2.0",
"license:unknown",
"super-resolution",
"image-enhancement",
"region:us"
] | 2022-04-29T18:08:33+00:00 | {"license": ["cc-by-4.0", "cc-by-2.0", "unknown"], "source_datasets": ["Open Images"], "task_categories": ["other"], "task_ids": [], "pretty_name": "VQGAN Pairs", "tags": ["super-resolution", "image-enhancement"]} | 2022-10-25T12:56:57+00:00 | [] | [] | TAGS
#task_categories-other #source_datasets-Open Images #license-cc-by-4.0 #license-cc-by-2.0 #license-unknown #super-resolution #image-enhancement #region-us
|
# VQGAN Pairs
This dataset contains ~2.4 million image pairs intended for improvement of image quality in VQGAN predictions. Each pair consists of:
- A 512x512 crop of an image taken from Open Images.
- A 256x256 image encoded and decoded using VQGAN, corresponding to the same image crop as the original.
This is the VQGAN implementation that was used for encoding and decoding: URL
# License
This dataset is created using Open Images, which has the following license:
The annotations are licensed by Google LLC under CC BY 4.0 license. The images are listed as having a CC BY 2.0 license. Note: while we tried to identify images that are licensed under a Creative Commons Attribution license, we make no representations or warranties regarding the license status of each image and you should verify the license for each image yourself. | [
"# VQGAN Pairs\n\nThis dataset contains ~2.4 million image pairs intended for improvement of image quality in VQGAN predictions. Each pair consists of:\n- A 512x512 crop of an image taken from Open Images.\n- A 256x256 image encoded and decoded using VQGAN, corresponding to the same image crop as the original.\n\nThis is the VQGAN implementation that was used for encoding and decoding: URL",
"# License\nThis dataset is created using Open Images, which has the following license:\nThe annotations are licensed by Google LLC under CC BY 4.0 license. The images are listed as having a CC BY 2.0 license. Note: while we tried to identify images that are licensed under a Creative Commons Attribution license, we make no representations or warranties regarding the license status of each image and you should verify the license for each image yourself."
] | [
"TAGS\n#task_categories-other #source_datasets-Open Images #license-cc-by-4.0 #license-cc-by-2.0 #license-unknown #super-resolution #image-enhancement #region-us \n",
"# VQGAN Pairs\n\nThis dataset contains ~2.4 million image pairs intended for improvement of image quality in VQGAN predictions. Each pair consists of:\n- A 512x512 crop of an image taken from Open Images.\n- A 256x256 image encoded and decoded using VQGAN, corresponding to the same image crop as the original.\n\nThis is the VQGAN implementation that was used for encoding and decoding: URL",
"# License\nThis dataset is created using Open Images, which has the following license:\nThe annotations are licensed by Google LLC under CC BY 4.0 license. The images are listed as having a CC BY 2.0 license. Note: while we tried to identify images that are licensed under a Creative Commons Attribution license, we make no representations or warranties regarding the license status of each image and you should verify the license for each image yourself."
] |
3b03eda1bd275edf2d9caaccf07d32d2c237c0d2 |
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Enzyme Commission labels.
The dataset is ported from Protinfer: https://github.com/google-research/proteinfer.
The EC-labels are extracted and indexed, the mapping is provided in `idx_mapping.json`. Proteins without EC tags are removed.
| DanielHesslow/SwissProt-EC | [
"Protein",
"Enzyme Commission",
"EC",
"region:us"
] | 2022-04-29T18:46:46+00:00 | {"language": ["protein sequences"], "datasets": ["Swissprot"], "tags": ["Protein", "Enzyme Commission", "EC"]} | 2022-04-30T14:12:33+00:00 | [] | [
"protein sequences"
] | TAGS
#Protein #Enzyme Commission #EC #region-us
|
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Enzyme Commission labels.
The dataset is ported from Protinfer: URL
The EC-labels are extracted and indexed, the mapping is provided in 'idx_mapping.json'. Proteins without EC tags are removed.
| [] | [
"TAGS\n#Protein #Enzyme Commission #EC #region-us \n"
] |
0db4e0ea3153c4caa5c49e7387f65b78a8996148 |
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with PFam labels.
The dataset is ported from Protinfer: https://github.com/google-research/proteinfer.
The Pfam-labels are extracted and indexed, the mapping is provided in `idx_mapping.json`. Proteins without Pfam tags are removed.
| DanielHesslow/SwissProt-Pfam | [
"Protein",
"PFam",
"region:us"
] | 2022-04-29T18:52:56+00:00 | {"language": ["protein sequences"], "datasets": ["Swissprot"], "tags": ["Protein", "PFam"]} | 2022-04-30T14:15:55+00:00 | [] | [
"protein sequences"
] | TAGS
#Protein #PFam #region-us
|
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with PFam labels.
The dataset is ported from Protinfer: URL
The Pfam-labels are extracted and indexed, the mapping is provided in 'idx_mapping.json'. Proteins without Pfam tags are removed.
| [] | [
"TAGS\n#Protein #PFam #region-us \n"
] |
f64128a2e9e7a2756daacee8cc00e9b86142e19e |
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Gene Ontology labels.
The dataset is ported from Protinfer: https://github.com/google-research/proteinfer.
The GO-labels are extracted and indexed, the mapping is provided in `idx_mapping.json`. Proteins without GO tags are removed.
| DanielHesslow/SwissProt-GO | [
"Protein",
"Gene Ontology",
"GO",
"region:us"
] | 2022-04-29T18:53:34+00:00 | {"language": ["protein sequences"], "datasets": ["Swissprot"], "tags": ["Protein", "Gene Ontology", "GO"]} | 2022-04-30T14:16:48+00:00 | [] | [
"protein sequences"
] | TAGS
#Protein #Gene Ontology #GO #region-us
|
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Gene Ontology labels.
The dataset is ported from Protinfer: URL
The GO-labels are extracted and indexed, the mapping is provided in 'idx_mapping.json'. Proteins without GO tags are removed.
| [] | [
"TAGS\n#Protein #Gene Ontology #GO #region-us \n"
] |
74c9c46cc27003593171ef289c364f42d0f4286b |
# Dataset
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Enzyme Commission labels.
The dataset is ported from Protinfer: https://github.com/google-research/proteinfer.
The leaf level EC-labels are extracted and indexed, the mapping is provided in `idx_mapping.json`. Proteins without leaf-level-EC tags are removed.
## Example
The protein Q87BZ2 has the following EC tags.
EC:2.-.-.- (Transferases)
EC:2.7.-.- (Transferring phosphorus-containing groups)
EC:2.7.1.- (Phosphotransferases with an alcohol group as acceptor)
EC:2.7.1.30 (Glycerol kinase)
We only extract the leaf level labels, here EC:2.7.1.30, corresponding to glycerol kinase.
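
A small sketch of the leaf-level filtering described above — the rule (keep only EC numbers whose four fields contain no `-` placeholder) is inferred from this example rather than taken from the repository's code:

```python
def leaf_level_ec(tags):
    """Keep only fully specified EC numbers, i.e. those with no '-' placeholder
    in any of the four fields (the leaf level of the EC hierarchy)."""
    leaves = []
    for tag in tags:
        number = tag.removeprefix("EC:")
        if "-" not in number.split("."):
            leaves.append(tag)
    return leaves

# The Q87BZ2 example from above:
tags = ["EC:2.-.-.-", "EC:2.7.-.-", "EC:2.7.1.-", "EC:2.7.1.30"]
print(leaf_level_ec(tags))  # ['EC:2.7.1.30'] -> glycerol kinase
```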
| lightonai/SwissProt-EC-leaf | [
"Protein",
"Enzyme Commission",
"region:us"
] | 2022-04-30T14:14:51+00:00 | {"language": ["protein sequences"], "datasets": ["Swissprot"], "tags": ["Protein", "Enzyme Commission"]} | 2022-05-05T08:54:42+00:00 | [] | [
"protein sequences"
] | TAGS
#Protein #Enzyme Commission #region-us
|
# Dataset
Swissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Enzyme Commission labels.
The dataset is ported from Protinfer: URL
The leaf level EC-labels are extracted and indexed, the mapping is provided in 'idx_mapping.json'. Proteins without leaf-level-EC tags are removed.
## Example
The protein Q87BZ2 has the following EC tags.
EC:2.-.-.- (Transferases)
EC:2.7.-.- (Transferring phosphorus-containing groups)
EC:2.7.1.- (Phosphotransferases with an alcohol group as acceptor)
EC:2.7.1.30 (Glycerol kinase)
We only extract the leaf level labels, here EC:2.7.1.30, corresponding to glycerol kinase.
| [
"# Dataset\n\nSwissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Enzyme Commission labels.\n\nThe dataset is ported from Protinfer: URL\n\nThe leaf level EC-labels are extracted and indexed, the mapping is provided in 'idx_mapping.json'. Proteins without leaf-level-EC tags are removed.",
"## Example\nThe protein Q87BZ2 have the following EC tags.\n\n EC:2.-.-.- (Transferases)\n EC:2.7.-.- (Transferring phosphorus-containing groups)\n EC:2.7.1.- (Phosphotransferases with an alcohol group as acceptor)\n EC:2.7.1.30 (Glycerol kinase)\n\nWe only extract the leaf level labels, here EC:2.7.1.30, corresponding to glycerol kinase."
] | [
"TAGS\n#Protein #Enzyme Commission #region-us \n",
"# Dataset\n\nSwissprot is a high quality manually annotated protein database. The dataset contains annotations with the functional properties of the proteins. Here we extract proteins with Enzyme Commission labels.\n\nThe dataset is ported from Protinfer: URL\n\nThe leaf level EC-labels are extracted and indexed, the mapping is provided in 'idx_mapping.json'. Proteins without leaf-level-EC tags are removed.",
"## Example\nThe protein Q87BZ2 have the following EC tags.\n\n EC:2.-.-.- (Transferases)\n EC:2.7.-.- (Transferring phosphorus-containing groups)\n EC:2.7.1.- (Phosphotransferases with an alcohol group as acceptor)\n EC:2.7.1.30 (Glycerol kinase)\n\nWe only extract the leaf level labels, here EC:2.7.1.30, corresponding to glycerol kinase."
] |
dc89ed1cc59ea92c19958c81c58070a2e95b02ab |
# Ukrainian Wikipedia Edits
### Dataset summary
A collection of over 5M sentence edits extracted from Ukrainian Wikipedia history revisions.
Edits were filtered by edit distance and sentence length. This makes them usable for grammatical error correction (GEC) or spellchecker models pre-training.
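
To make the filtering idea concrete, here is a hedged sketch; the actual thresholds and similarity measure used in the extraction pipeline are not stated in this card, so the values below are illustrative only:

```python
from difflib import SequenceMatcher

def keep_edit(src: str, tgt: str,
              min_tokens: int = 3, max_tokens: int = 300,
              min_similarity: float = 0.7) -> bool:
    """Illustrative filter: keep pairs of reasonable length whose edit is small
    relative to the whole sentence, and drop pairs with no change at all."""
    if src == tgt:
        return False
    if not (min_tokens <= len(src.split()) <= max_tokens):
        return False
    return SequenceMatcher(None, src, tgt).ratio() >= min_similarity

print(keep_edit("Київ — столиця України", "Київ — столиця України."))  # True
```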
### Supported Tasks and Leaderboards
* Ukrainian grammatical error correction (GEC) - see [UA-GEC](https://github.com/grammarly/ua-gec)
* Ukrainian spelling correction
### Languages
Ukrainian
## Dataset Structure
### Data Fields
* `src` - sentence before edit
* `tgt` - sentence after edit
### Data Splits
* `full/train` contains all the data (5,243,376 samples)
* `tiny/train` contains a sample of 5,000 examples.
## Dataset Creation
The latest full Ukrainian Wiki dump was used as of 2022-04-30.
It was processed with the [wikiedits](https://github.com/snukky/wikiedits) and custom scripts.
### Source Data
#### Initial Data Collection and Normalization
Wikipedia
#### Who are the source language producers?
Wikipedia writers
### Annotations
#### Annotation process
Annotations inferred by comparing two subsequent page revisions.
#### Who are the annotators?
People who edit Wikipedia pages.
### Personal and Sensitive Information
No
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The data is noisy. In addition to GEC and spelling edits, it contains a good chunk of factual changes and vandalism.
More task-specific filters could help.
## Additional Information
### Dataset Curators
[Oleksiy Syvokon](https://github.com/asivokon)
### Licensing Information
CC-BY-3.0
### Citation Information
```
@inproceedings{wiked2014,
author = {Roman Grundkiewicz and Marcin Junczys-Dowmunt},
title = {The WikEd Error Corpus: A Corpus of Corrective Wikipedia Edits and its Application to Grammatical Error Correction},
booktitle = {Advances in Natural Language Processing -- Lecture Notes in Computer Science},
editor = {Adam Przepiórkowski and Maciej Ogrodniczuk},
publisher = {Springer},
year = {2014},
volume = {8686},
pages = {478--490},
url = {http://emjotde.github.io/publications/pdf/mjd.poltal2014.draft.pdf}
}
```
### Contributions
[@snukky](https://github.com/snukky) created tools for dataset processing.
[@asivokon](https://github.com/asivokon) generated this dataset.
| osyvokon/wiki-edits-uk | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"license:cc-by-3.0",
"region:us"
] | 2022-04-30T17:21:55+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["uk-UA"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual", "translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Ukrainian Wikipedia edits "} | 2022-07-02T18:06:25+00:00 | [] | [
"uk-UA"
] | TAGS
#task_categories-other #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #license-cc-by-3.0 #region-us
|
# Ukrainian Wikipedia Edits
### Dataset summary
A collection of over 5M sentence edits extracted from Ukrainian Wikipedia history revisions.
Edits were filtered by edit distance and sentence length. This makes them usable for grammatical error correction (GEC) or spellchecker models pre-training.
### Supported Tasks and Leaderboards
* Ukrainian grammatical error correction (GEC) - see UA-GEC
* Ukrainian spelling correction
### Languages
Ukrainian
## Dataset Structure
### Data Fields
* 'src' - sentence before edit
* 'tgt' - sentence after edit
### Data Splits
* 'full/train' contains all the data (5,243,376 samples)
* 'tiny/train' contains a sample of 5,000 examples.
## Dataset Creation
The latest full Ukrainian Wiki dump was used as of 2022-04-30.
It was processed with the wikiedits and custom scripts.
### Source Data
#### Initial Data Collection and Normalization
Wikipedia
#### Who are the source language producers?
Wikipedia writers
### Annotations
#### Annotation process
Annotations inferred by comparing two subsequent page revisions.
#### Who are the annotators?
People who edit Wikipedia pages.
### Personal and Sensitive Information
No
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The data is noisy. In addition to GEC and spelling edits, it contains a good chunk of factual changes and vandalism.
More task-specific filters could help.
## Additional Information
### Dataset Curators
Oleksiy Syvokon
### Licensing Information
CC-BY-3.0
### Contributions
@snukky created tools for dataset processing.
@asivokon generated this dataset.
| [
"# Ukrainian Wikipedia Edits",
"### Dataset summary\n\nA collection of over 5M sentence edits extracted from Ukrainian Wikipedia history revisions.\n\nEdits were filtered by edit distance and sentence length. This makes them usable for grammatical error correction (GEC) or spellchecker models pre-training.",
"### Supported Tasks and Leaderboards\n\n* Ukrainian grammatical error correction (GEC) - see UA-GEC\n* Ukrainian spelling correction",
"### Languages\n\nUkrainian",
"## Dataset Structure",
"### Data Fields\n\n* 'src' - sentence before edit\n* 'tgt' - sentence after edit",
"### Data Splits\n\n* 'full/train' contains all the data (5,243,376 samples)\n* 'tiny/train' contains a 5000 examples sample.",
"## Dataset Creation\n\nLatest full Ukrainian Wiki dump were used as of 2022-04-30.\n\nIt was processed with the wikiedits and custom scripts.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nWikipedia",
"#### Who are the source language producers?\n\nWikipedia writers",
"### Annotations",
"#### Annotation process\n\nAnnotations inferred by comparing two subsequent page revisions.",
"#### Who are the annotators?\n\nPeople who edit Wikipedia pages.",
"### Personal and Sensitive Information\n\nNo",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\nThe data is noisy. In addition to GEC and spelling edits, it contains a good chunk of factual changes and vandalism.\n\nMore task-specific filters could help.",
"## Additional Information",
"### Dataset Curators\n\nOleksiy Syvokon",
"### Licensing Information\n\nCC-BY-3.0",
"### Contributions\n\n@snukky created tools for dataset processing.\n\n@asivokon generated this dataset."
] | [
"TAGS\n#task_categories-other #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #license-cc-by-3.0 #region-us \n",
"# Ukrainian Wikipedia Edits",
"### Dataset summary\n\nA collection of over 5M sentence edits extracted from Ukrainian Wikipedia history revisions.\n\nEdits were filtered by edit distance and sentence length. This makes them usable for grammatical error correction (GEC) or spellchecker models pre-training.",
"### Supported Tasks and Leaderboards\n\n* Ukrainian grammatical error correction (GEC) - see UA-GEC\n* Ukrainian spelling correction",
"### Languages\n\nUkrainian",
"## Dataset Structure",
"### Data Fields\n\n* 'src' - sentence before edit\n* 'tgt' - sentence after edit",
"### Data Splits\n\n* 'full/train' contains all the data (5,243,376 samples)\n* 'tiny/train' contains a 5000 examples sample.",
"## Dataset Creation\n\nLatest full Ukrainian Wiki dump were used as of 2022-04-30.\n\nIt was processed with the wikiedits and custom scripts.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nWikipedia",
"#### Who are the source language producers?\n\nWikipedia writers",
"### Annotations",
"#### Annotation process\n\nAnnotations inferred by comparing two subsequent page revisions.",
"#### Who are the annotators?\n\nPeople who edit Wikipedia pages.",
"### Personal and Sensitive Information\n\nNo",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\nThe data is noisy. In addition to GEC and spelling edits, it contains a good chunk of factual changes and vandalism.\n\nMore task-specific filters could help.",
"## Additional Information",
"### Dataset Curators\n\nOleksiy Syvokon",
"### Licensing Information\n\nCC-BY-3.0",
"### Contributions\n\n@snukky created tools for dataset processing.\n\n@asivokon generated this dataset."
] |
c517b46f5b0574f716b9e3b173deb85d4db74236 | # AutoTrain Dataset for project: company
## Dataset Description
This dataset has been automatically processed by AutoTrain for project company.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"sahil",
"prasad",
"president",
"www",
"swimcentre",
"com",
"banik",
"baalkrishan",
"gandhi",
"com",
"no",
"satish",
"nagar",
"hisar"
],
"tags": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
},
{
"tokens": [
"olivia",
"wilson",
"real",
"estate",
"agent",
"reallygreatsite",
"com",
"anywhere",
"st",
"any",
"city",
"st",
"www",
"reallygreatsite",
"com"
],
"tags": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(num_classes=2, names=['0', '9'], id=None), length=-1, id=None)"
}
```
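
For illustration, the two parallel sequences in a record can be inspected together as shown below (a trivial sketch; the label names `'0'` and `'9'` come from the field description above):

```python
# One (shortened) record in the format shown above.
sample = {
    "tokens": ["olivia", "wilson", "real", "estate", "agent"],
    "tags": [0, 0, 0, 0, 0],
}

# Map the integer class indices back to the names listed in the ClassLabel schema.
label_names = ["0", "9"]
for token, tag in zip(sample["tokens"], sample["tags"]):
    print(f"{token}\t{label_names[tag]}")
```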
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 999651 |
| valid | 499630 |
| defector/autotrain-data-company | [
"language:en",
"region:us"
] | 2022-04-30T18:34:55+00:00 | {"language": ["en"]} | 2022-10-25T09:12:59+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| AutoTrain Dataset for project: company
======================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project company.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
e8ae5ab634fd26487d0686488fd8abfb833ff283 | # Dataset Card for OSDG-CD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [OSDG-CD homepage](https://zenodo.org/record/8397907)
### Dataset Summary
The OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, which were validated by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries, with respect to the Sustainable Development Goals (SDGs).
> NOTES
>
> * There are currently no examples for SDGs 16 and 17. See [this GitHub issue](https://github.com/osdg-ai/osdg-data/issues/3).
> * As of July 2023, there are also examples for SDG 16.
### Supported Tasks and Leaderboards
TBD
### Languages
The language of the dataset is English.
## Dataset Structure
### Data Instances
For each instance, there is a string for the text, a string for the SDG, and an integer for the label.
```
{'text': 'Each section states the economic principle, reviews international good practice and discusses the situation in Brazil.',
'label': 5}
```
The average token counts for the premises and hypotheses are given below:
| Feature | Mean Token Count |
| ---------- | ---------------- |
| Premise | 14.1 |
| Hypothesis | 8.3 |
### Data Fields
- `doi`: Digital Object Identifier of the original document
- `text_id`: unique text identifier
- `text`: text excerpt from the document
- `sdg`: the SDG the text is validated against
- `label`: an integer from `0` to `17` which corresponds to the `sdg` field
- `labels_negative`: the number of volunteers who rejected the suggested SDG label
- `labels_positive`: the number of volunteers who accepted the suggested SDG label
- `agreement`: agreement score based on the formula
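
Below is a minimal sketch of loading the data and decoding the integer `label` back into its SDG name with the Hugging Face `datasets` library. The repository id `Filippo/osdg_cd` and the field names come from this card; everything else (including whether the config name must be passed explicitly) is an assumption.

```python
from datasets import load_dataset

# Load the single train split of the OSDG Community Dataset
ds = load_dataset("Filippo/osdg_cd", split="train")

example = ds[0]
# `label` is a ClassLabel feature, so it can be decoded back to a name such as "SDG 5"
sdg_name = ds.features["label"].int2str(example["label"])
print(example["text"][:80], "->", sdg_name, f"(agreement={example['agreement']:.2f})")
```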
### Data Splits
The OSDG-CD dataset has a single split: _train_.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 32,327 |
## Dataset Creation
### Curation Rationale
[The OSDG Community Dataset (OSDG-CD)](https://zenodo.org/record/8397907) was developed as a benchmark for ...
with the goal of producing a dataset large enough to train models using neural methodologies.
### Source Data
#### Initial Data Collection and Normalization
TBD
#### Who are the source language producers?
TBD
### Annotations
#### Annotation process
TBD
#### Who are the annotators?
TBD
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
## Considerations for Using the Data
### Social Impact of Dataset
TBD
## Additional Information
TBD
### Dataset Curators
TBD
### Licensing Information
The OSDG Community Dataset (OSDG-CD) is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@dataset{osdg_2023_8397907,
author = {OSDG and
UNDP IICPSD SDG AI Lab and
PPMI},
title = {OSDG Community Dataset (OSDG-CD)},
month = oct,
year = 2023,
note = {{This CSV file uses UTF-8 character encoding. For
easy access on MS Excel, open the file using Data
→ From Text/CSV. Please split CSV data into
different columns by using a TAB delimiter.}},
publisher = {Zenodo},
version = {2023.10},
doi = {10.5281/zenodo.8397907},
url = {https://doi.org/10.5281/zenodo.8397907}
}
```
### Contributions
TBD
| Filippo/osdg_cd | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-30T20:54:04+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "OSDG Community Dataset (OSDG-CD)", "dataset_info": {"config_name": "main_config", "features": [{"name": "doi", "dtype": "string"}, {"name": "text_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "sdg", "dtype": "uint16"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "SDG 1", "1": "SDG 2", "2": "SDG 3", "3": "SDG 4", "4": "SDG 5", "5": "SDG 6", "6": "SDG 7", "7": "SDG 8", "8": "SDG 9", "9": "SDG 10", "10": "SDG 11", "11": "SDG 12", "12": "SDG 13", "13": "SDG 14", "14": "SDG 15", "15": "SDG 16"}}}}, {"name": "labels_negative", "dtype": "uint16"}, {"name": "labels_positive", "dtype": "uint16"}, {"name": "agreement", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 30151244, "num_examples": 42355}], "download_size": 29770590, "dataset_size": 30151244}} | 2023-10-08T08:57:13+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
| Dataset Card for OSDG-CD
========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: OSDG-CD homepage
### Dataset Summary
The OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, which were validated by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries, with respect to the Sustainable Development Goals (SDGs).
>
> NOTES
>
>
> * There are currently no examples for SDGs 16 and 17. See this GitHub issue.
> * As of July 2023, there are examples also for SDG 16.
>
>
>
### Supported Tasks and Leaderboards
TBD
### Languages
The language of the dataset is English.
Dataset Structure
-----------------
### Data Instances
For each instance, there is a string for the text, a string for the SDG, and an integer for the label.
The average token count for the premises and hypotheses are given below:
### Data Fields
* 'doi': Digital Object Identifier of the original document
* 'text\_id': unique text identifier
* 'text': text excerpt from the document
* 'sdg': the SDG the text is validated against
* 'label': an integer from '0' to '17' which corresponds to the 'sdg' field
* 'labels\_negative': the number of volunteers who rejected the suggested SDG label
* 'labels\_positive': the number of volunteers who accepted the suggested SDG label
* 'agreement': agreement score based on the formula
### Data Splits
The OSDG-CD dataset has a single split: *train*.
Dataset Creation
----------------
### Curation Rationale
The OSDG Community Dataset (OSDG-CD) was developed as a benchmark for ...
with the goal of producing a dataset large enough to train models using neural methodologies.
### Source Data
#### Initial Data Collection and Normalization
TBD
#### Who are the source language producers?
TBD
### Annotations
#### Annotation process
TBD
#### Who are the annotators?
TBD
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
TBD
Additional Information
----------------------
TBD
### Dataset Curators
TBD
### Licensing Information
The OSDG Community Dataset (OSDG-CD) is licensed under a Creative Commons Attribution 4.0 International License.
### Contributions
TBD
| [
"### Dataset Summary\n\n\nThe OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, which were validated by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries, with respect to the Sustainable Development Goals (SDGs).\n\n\n\n> \n> NOTES\n> \n> \n> * There are currently no examples for SDGs 16 and 17. See this GitHub issue.\n> * As of July 2023, there areexamples also for SDG 16.\n> \n> \n>",
"### Supported Tasks and Leaderboards\n\n\nTBD",
"### Languages\n\n\nThe language of the dataset is English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is a string for the text, a string for the SDG, and an integer for the label.\n\n\nThe average token count for the premises and hypotheses are given below:",
"### Data Fields\n\n\n* 'doi': Digital Object Identifier of the original document\n* 'text\\_id': unique text identifier\n* 'text': text excerpt from the document\n* 'sdg': the SDG the text is validated against\n* 'label': an integer from '0' to '17' which corresponds to the 'sdg' field\n* 'labels\\_negative': the number of volunteers who rejected the suggested SDG label\n* 'labels\\_positive': the number of volunteers who accepted the suggested SDG label\n* 'agreement': agreement score based on the formula",
"### Data Splits\n\n\nThe OSDG-CD dataset has 1 splits: *train*.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe The OSDG Community Dataset (OSDG-CD) was developed as a benchmark for ...\nwith the goal of producing a dataset large enough to train models using neural methodologies.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nTBD",
"#### Who are the source language producers?\n\n\nTBD",
"### Annotations",
"#### Annotation process\n\n\nTBD",
"#### Who are the annotators?\n\n\nTBD",
"### Personal and Sensitive Information\n\n\nThe dataset does not contain any personal information about the authors or the crowdworkers.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nTBD\n\n\nAdditional Information\n----------------------\n\n\nTBD",
"### Dataset Curators\n\n\nTBD",
"### Licensing Information\n\n\nThe OSDG Community Dataset (OSDG-CD) is licensed under a Creative Commons Attribution 4.0 International License.",
"### Contributions\n\n\nTBD"
] | [
"TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nThe OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, which were validated by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries, with respect to the Sustainable Development Goals (SDGs).\n\n\n\n> \n> NOTES\n> \n> \n> * There are currently no examples for SDGs 16 and 17. See this GitHub issue.\n> * As of July 2023, there areexamples also for SDG 16.\n> \n> \n>",
"### Supported Tasks and Leaderboards\n\n\nTBD",
"### Languages\n\n\nThe language of the dataset is English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is a string for the text, a string for the SDG, and an integer for the label.\n\n\nThe average token count for the premises and hypotheses are given below:",
"### Data Fields\n\n\n* 'doi': Digital Object Identifier of the original document\n* 'text\\_id': unique text identifier\n* 'text': text excerpt from the document\n* 'sdg': the SDG the text is validated against\n* 'label': an integer from '0' to '17' which corresponds to the 'sdg' field\n* 'labels\\_negative': the number of volunteers who rejected the suggested SDG label\n* 'labels\\_positive': the number of volunteers who accepted the suggested SDG label\n* 'agreement': agreement score based on the formula",
"### Data Splits\n\n\nThe OSDG-CD dataset has 1 splits: *train*.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe The OSDG Community Dataset (OSDG-CD) was developed as a benchmark for ...\nwith the goal of producing a dataset large enough to train models using neural methodologies.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nTBD",
"#### Who are the source language producers?\n\n\nTBD",
"### Annotations",
"#### Annotation process\n\n\nTBD",
"#### Who are the annotators?\n\n\nTBD",
"### Personal and Sensitive Information\n\n\nThe dataset does not contain any personal information about the authors or the crowdworkers.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nTBD\n\n\nAdditional Information\n----------------------\n\n\nTBD",
"### Dataset Curators\n\n\nTBD",
"### Licensing Information\n\n\nThe OSDG Community Dataset (OSDG-CD) is licensed under a Creative Commons Attribution 4.0 International License.",
"### Contributions\n\n\nTBD"
] |
4ed58453467ac729dd815696584b8fad8dac4646 |
# RAFT submissions for my-raft-submission
## Submitting to the leaderboard
To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week.
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
```python
from pathlib import Path
import pandas as pd
from collections import Counter
from datasets import load_dataset, get_dataset_config_names
tasks = get_dataset_config_names("ought/raft")
for task in tasks:
# Load dataset
raft_subset = load_dataset("ought/raft", task)
# Compute majority class over training set
counter = Counter(raft_subset["train"]["Label"])
majority_class = counter.most_common(1)[0][0]
# Load predictions file
preds = pd.read_csv(f"data/{task}/predictions.csv")
# Convert label IDs to label names
preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
# Save predictions
preds.to_csv(f"data/{task}/predictions.csv", index=False)
```
As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
```
data
├── ade_corpus_v2
│ ├── predictions.csv
│ └── task.json
├── banking_77
│ ├── predictions.csv
│ └── task.json
├── neurips_impact_statement_risks
│ ├── predictions.csv
│ └── task.json
├── one_stop_english
│ ├── predictions.csv
│ └── task.json
├── overruling
│ ├── predictions.csv
│ └── task.json
├── semiconductor_org_types
│ ├── predictions.csv
│ └── task.json
├── systematic_review_inclusion
│ ├── predictions.csv
│ └── task.json
├── tai_safety_research
│ ├── predictions.csv
│ └── task.json
├── terms_of_service
│ ├── predictions.csv
│ └── task.json
├── tweet_eval_hate
│ ├── predictions.csv
│ └── task.json
└── twitter_complaints
├── predictions.csv
└── task.json
```
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaluated on Sunday 05 September 2021 ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. | Anon126/my-raft-submission | [
"benchmark:raft",
"region:us"
] | 2022-05-01T09:48:53+00:00 | {"benchmark": "raft", "type": "prediction", "submission_name": "none"} | 2022-05-01T09:50:18+00:00 | [] | [] | TAGS
#benchmark-raft #region-us
|
# RAFT submissions for my-raft-submission
## Submitting to the leaderboard
To make a submission to the leaderboard, there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate one submission per week. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week.
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called 'URL' with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
See the dummy predictions in the 'data' folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
As you can see in the example, each 'URL' file should be stored in the task's subfolder in 'data' and at the end you should have something like the following:
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
If everything is correct, you should see the following message:
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
If there are no errors, you should see the following message:
where the evaluation is run every Sunday and your results will be visible on the leaderboard. | [
"# RAFT submissions for my-raft-submission",
"## Submitting to the leaderboard\n\nTo make a submission to the leaderboard, there are three main steps:\n\n1. Generate predictions on the unlabeled test set of each task\n2. Validate the predictions are compatible with the evaluation framework\n3. Push the predictions to the Hub!\n\nSee the instructions below for more details.",
"### Rules\n\n1. To prevent overfitting to the public leaderboard, we only evaluate one submission per week. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week. \n2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.\n3. Use of unlabeled test data is allowed, as is it always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.\n4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.",
"### Submission file format\n\nFor each task in RAFT, you should create a CSV file called 'URL' with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:\n\n* ID (int)\n* Label (string)\n\nSee the dummy predictions in the 'data' folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:\n\n\n\nAs you can see in the example, each 'URL' file should be stored in the task's subfolder in 'data' and at the end you should have something like the following:",
"### Validate your submission\n\nTo ensure that your submission files are correctly formatted, run the following command from the root of the repository:\n\n\n\nIf everything is correct, you should see the following message:",
"### Push your submission to the Hugging Face Hub!\n\nThe final step is to commit your files and push them to the Hub:\n\n\n\nIf there are no errors, you should see the following message:\n\n\n\nwhere the evaluation is run every Sunday and your results will be visible on the leaderboard."
] | [
"TAGS\n#benchmark-raft #region-us \n",
"# RAFT submissions for my-raft-submission",
"## Submitting to the leaderboard\n\nTo make a submission to the leaderboard, there are three main steps:\n\n1. Generate predictions on the unlabeled test set of each task\n2. Validate the predictions are compatible with the evaluation framework\n3. Push the predictions to the Hub!\n\nSee the instructions below for more details.",
"### Rules\n\n1. To prevent overfitting to the public leaderboard, we only evaluate one submission per week. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week. \n2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.\n3. Use of unlabeled test data is allowed, as is it always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.\n4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.",
"### Submission file format\n\nFor each task in RAFT, you should create a CSV file called 'URL' with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:\n\n* ID (int)\n* Label (string)\n\nSee the dummy predictions in the 'data' folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:\n\n\n\nAs you can see in the example, each 'URL' file should be stored in the task's subfolder in 'data' and at the end you should have something like the following:",
"### Validate your submission\n\nTo ensure that your submission files are correctly formatted, run the following command from the root of the repository:\n\n\n\nIf everything is correct, you should see the following message:",
"### Push your submission to the Hugging Face Hub!\n\nThe final step is to commit your files and push them to the Hub:\n\n\n\nIf there are no errors, you should see the following message:\n\n\n\nwhere the evaluation is run every Sunday and your results will be visible on the leaderboard."
] |
c3d2d2159db2fb34ca8ef05cbf96f21addeeea8b | ---
annotations_creators:
- automatic
language_creators:
- found
languages:
- es-AR
licenses:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id:
pretty_name: wikiner
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
license: cc
---
# Dataset Card for wikiner
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Learning multilingual named entity recognition from Wikipedia](https://doi.org/10.1016/j.artint.2012.03.006)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [NazaGara](mailto:[email protected])
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
This dataset was used in order to train a Spanish NER model using [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased).
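
As an illustration of the exact-match, entity-level F1 described above, one common choice is the `seqeval` package — an assumption of this sketch, not something this card prescribes:

```python
from seqeval.metrics import f1_score

# Gold and predicted IOB2 tag sequences for two illustrative sentences
y_true = [["B-PER", "O", "O", "B-ORG", "I-ORG"], ["B-LOC", "O"]]
y_pred = [["B-PER", "O", "O", "B-ORG", "O"],     ["B-LOC", "O"]]

# Entity-level F1: only exact span/type matches count as correct
print(f1_score(y_true, y_pred))
```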
### Languages
The only supported language is Spanish (es).
## Dataset Structure
### Data Fields
The dictionary mapping each label id to its name is:
{
0: 'O',
1: 'B-PER',
2: 'I-PER',
3: 'B-ORG',
4: 'I-ORG',
5: 'B-LOC',
6: 'I-LOC',
7: 'B-MISC',
8: 'I-MISC'
}
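
A minimal sketch of decoding a sequence of predicted label ids with this mapping; the helper name and the example id sequence are purely illustrative:

```python
# Copy of the id -> label mapping listed above
ID2LABEL = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG",
            5: "B-LOC", 6: "I-LOC", 7: "B-MISC", 8: "I-MISC"}

def decode_labels(label_ids):
    """Turn integer label ids into their IOB2 tag strings."""
    return [ID2LABEL[i] for i in label_ids]

# e.g. ids for a 5-token sentence -> ['B-PER', 'O', 'O', 'B-ORG', 'I-ORG']
print(decode_labels([1, 0, 0, 3, 4]))
```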
### Data Splits
The only split is the train split.
Number of examples = 128355
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Created by Nothman et al. in 2013.
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | NazaGara/wikiner-es | [
"region:us"
] | 2022-05-01T12:57:29+00:00 | {} | 2022-08-14T14:01:57+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- automatic
language_creators:
- found
languages:
- es-AR
licenses:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id:
pretty_name: wikiner
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
license: cc
---
# Dataset Card for wikiner
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository: [Needs More Information]
- Paper: Learning multilingual named entity recognition from Wikipedia
- Leaderboard:
- Point of Contact: NazaGara
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
- 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
This dataset was used in order to train a Spanish NER model using BETO.
### Languages
The only supported language is spanish (es).
## Dataset Structure
### Data Fields
The dictionary to map the id to the Label names is:
{
0: 'O',
1: 'B-PER',
2: 'I-PER',
3: 'B-ORG',
4: 'I-ORG',
5: 'B-LOC',
6: 'I-LOC',
7: 'B-MISC',
8: 'I-MISC'
}
### Data Splits
The only split is the train split.
Number of examples = 128355
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Created by Nothman et al. at 2013.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for wikiner",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: [Needs More Informatio]\n- Paper: Learning multilingual named entity recognition from Wikipedia\n- Leaderboard: \n- Point of Contact: NazaGara",
"### Dataset Summary\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .",
"### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.\nAfter 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.\n- 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.\n\nThis dataset was used in order to train a Spanish NER model using BETO.",
"### Languages\n\nThe only supported language is spanish (es).",
"## Dataset Structure",
"### Data Fields\n\nThe dictionary to map the id to the Label names is:\n{\n 0: 'O',\n 1: 'B-PER',\n 2: 'I-PER',\n 3: 'B-ORG',\n 4: 'I-ORG',\n 5: 'B-LOC',\n 6: 'I-LOC',\n 7: 'B-MISC',\n 8: 'I-MISC'\n}",
"### Data Splits\n\nThe only split is the train split.\nNumber of examples = 128355",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nCreated by Nothman et al. at 2013.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for wikiner",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: \n- Repository: [Needs More Informatio]\n- Paper: Learning multilingual named entity recognition from Wikipedia\n- Leaderboard: \n- Point of Contact: NazaGara",
"### Dataset Summary\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .",
"### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.\nAfter 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.\n- 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.\n\nThis dataset was used in order to train a Spanish NER model using BETO.",
"### Languages\n\nThe only supported language is spanish (es).",
"## Dataset Structure",
"### Data Fields\n\nThe dictionary to map the id to the Label names is:\n{\n 0: 'O',\n 1: 'B-PER',\n 2: 'I-PER',\n 3: 'B-ORG',\n 4: 'I-ORG',\n 5: 'B-LOC',\n 6: 'I-LOC',\n 7: 'B-MISC',\n 8: 'I-MISC'\n}",
"### Data Splits\n\nThe only split is the train split.\nNumber of examples = 128355",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nCreated by Nothman et al. at 2013.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
fed332421fbf9afc326f2490ec5bb49ffec180f3 |
# Dataset Card for ToxiGen
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-instances)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Sign up for Data Access
To access ToxiGen, first fill out [this form](https://forms.office.com/r/r6VXX8f8vh).
## Dataset Description
- **Repository:** https://github.com/microsoft/toxigen
- **Paper:** https://arxiv.org/abs/2203.09509
- **Point of Contact #1:** [Tom Hartvigsen](mailto:[email protected])
- **Point of Contact #2:** [Saadia Gabriel](mailto:[email protected])
### Dataset Summary
This dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in [our paper](https://arxiv.org/abs/2203.09509).
### Languages
All text is written in English.
## Dataset Structure
### Data Fields
We release TOXIGEN as a dataframe with the following fields:
- **prompt** is the prompt used for **generation**.
- **generation** is the TOXIGEN generated text.
- **generation_method** denotes whether or not ALICE was used to generate the corresponding generation. If this value is ALICE, then ALICE was used; if it is TopK, ALICE was not used.
- **prompt_label** is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).
- **group** indicates the target group of the prompt.
- **roberta_prediction** is the probability predicted by our corresponding RoBERTa model for each instance.
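
A minimal sketch of loading the data once access has been granted through the form above, then keeping only instances with a toxic prompt. The repository id `skg/toxigen-data` and the field names come from this card; the split name and the authentication argument are assumptions (authenticate first, e.g. with `huggingface-cli login`).

```python
from datasets import load_dataset

# Gated dataset: requires prior access approval and an authenticated session
ds = load_dataset("skg/toxigen-data", split="train", use_auth_token=True)

# Keep only instances whose prompt is labeled toxic (prompt_label == 1)
toxic = ds.filter(lambda ex: ex["prompt_label"] == 1)

for ex in toxic.select(range(3)):
    print(ex["group"], "|", ex["generation_method"], "|", ex["generation"][:60])
```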
### Citation Information
```bibtex
@inproceedings{hartvigsen2022toxigen,
title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022}
}
```
| skg/toxigen-data | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"arxiv:2203.09509",
"region:us"
] | 2022-05-01T14:49:02+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "ToxiGen", "languages": ["en-US"], "licenses": []} | 2022-06-20T10:12:11+00:00 | [
"2203.09509"
] | [] | TAGS
#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #arxiv-2203.09509 #region-us
|
# Dataset Card for ToxiGen
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Fields
- Additional Information
- Citation Information
## Sign up for Data Access
To access ToxiGen, first fill out this form.
## Dataset Description
- Repository: URL
- Paper: URL
- Point of Contact #1: Tom Hartvigsen
- Point of Contact #2: Saadia Gabriel
### Dataset Summary
This dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in our paper.
### Languages
All text is written in English.
## Dataset Structure
### Data Fields
We release TOXIGEN as a dataframe with the following fields:
- prompt is the prompt used for generation.
- generation is the TOXIGEN generated text.
- generation_method denotes whether or not ALICE was used to generate the corresponding generation. If this value is ALICE, then ALICE was used, if it is TopK, then ALICE was not used.
- prompt_label is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).
- group indicates the target group of the prompt.
- roberta_prediction is the probability predicted by our corresponding RoBERTa model for each instance.
| [
"# Dataset Card for ToxiGen",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Fields\n- Additional Information\n - Citation Information",
"## Sign up for Data Access\nTo access ToxiGen, first fill out this form.",
"## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact #1: Tom Hartvigsen\n- Point of Contact #2: Saadia Gabriel",
"### Dataset Summary\n\nThis dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in our paper.",
"### Languages\n\nAll text is written in English.",
"## Dataset Structure",
"### Data Fields\n\nWe release TOXIGEN as a dataframe with the following fields:\n- prompt is the prompt used for generation.\n- generation is the TOXIGEN generated text.\n- generation_method denotes whether or not ALICE was used to generate the corresponding generation. If this value is ALICE, then ALICE was used, if it is TopK, then ALICE was not used.\n- prompt_label is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).\n- group indicates the target group of the prompt.\n- roberta_prediction is the probability predicted by our corresponding RoBERTa model for each instance."
] | [
"TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #arxiv-2203.09509 #region-us \n",
"# Dataset Card for ToxiGen",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Fields\n- Additional Information\n - Citation Information",
"## Sign up for Data Access\nTo access ToxiGen, first fill out this form.",
"## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact #1: Tom Hartvigsen\n- Point of Contact #2: Saadia Gabriel",
"### Dataset Summary\n\nThis dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in our paper.",
"### Languages\n\nAll text is written in English.",
"## Dataset Structure",
"### Data Fields\n\nWe release TOXIGEN as a dataframe with the following fields:\n- prompt is the prompt used for generation.\n- generation is the TOXIGEN generated text.\n- generation_method denotes whether or not ALICE was used to generate the corresponding generation. If this value is ALICE, then ALICE was used, if it is TopK, then ALICE was not used.\n- prompt_label is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).\n- group indicates the target group of the prompt.\n- roberta_prediction is the probability predicted by our corresponding RoBERTa model for each instance."
] |
b52b6486b38d74ddaf95626b15e0f0c50fa5e959 |
# Dataset Card for WIT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
- **Repository:** [WIT repository](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)
- **Leaderboard:** [WIT leaderboard](https://www.kaggle.com/c/wikipedia-image-caption)
- **Point of Contact:** [WIT e-mail](mailto:[email protected])
### Dataset Summary
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
A few unique advantages of WIT:
* The largest multimodal dataset (time of this writing) by the number of image-text examples.
* A massively multilingual (first of its kind) with coverage for over 100+ languages.
* A collection of diverse set of concepts and real world entities.
* Brings forth challenging real-world test sets.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": get_datasets_user_agent()},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("wit")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- `text-retrieval`: The goal in this task is to build a model that retrieves the text closest to an image.
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption.
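
For instance, a single caption string per example can be formed by falling back across the three caption fields; the priority order below is an arbitrary choice, not something the dataset prescribes:

```python
def build_caption(example):
    """Return the first available caption text for a WIT example, or None."""
    for key in (
        "caption_reference_description",
        "caption_attribution_description",
        "caption_alt_text_description",
    ):
        if example.get(key):
            return example[key]
    return None
```

A helper like this can then be mapped over the rows to pair each `image_url` with a single text caption.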
### Languages
The dataset contains examples from all Wikipedia languages, with the following stats:
Image-Text | # Lang | Uniq. Images | # Lang
------------ | ------ | ------------- | ------
total > 1M | 9 | images > 1M | 6
total > 500K | 10 | images > 500K | 12
total > 100K | 36 | images > 100K | 35
total > 50K | 15 | images > 50K | 17
total > 14K | 38 | images > 13K | 38
## Dataset Structure
### Data Instances
```
{
'language': 'en',
'page_url': 'https://en.wikipedia.org/wiki/Oxydactylus',
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/5/5f/Oxydactylus_longipes_fm.jpg',
'page_title': 'Oxydactylus',
'section_title': None,
'hierarchical_section_title': 'Oxydactylus',
'caption_reference_description': None,
'caption_attribution_description': 'English: Mounted skeleton of Oxydactylus longipes in the Field Museum of Natural History.',
'caption_alt_text_description': None,
'mime_type': 'image/jpeg',
'original_height': 3564,
'original_width': 2748,
'is_main_image': True,
'attribution_passes_lang_id': True,
'page_changed_recently': True,
'context_page_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene, existing for approximately 14 million years. The name is from the Ancient Greek οξύς and δάκτυλος.\nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.',
'context_section_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene (28.4–13.7 mya), existing for approximately 14 million years. The name is from the Ancient Greek οξύς (oxys, "sharp")and δάκτυλος (daktylos, "finger").\n \nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.'
}
```
### Data Fields
- `language`: Language code depicting wikipedia language of the page
- `page_url`: URL to wikipedia page
- `image_url`: URL to wikipedia image
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
- `hierarchical_section_title`: Hierarchical section's title
- `caption_reference_description`: This is the caption that is visible on the wiki page directly below the image.
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different to the original page article.
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `mime_type`: Mime type associated to the image.
- `original_height`: Image height
- `original_width`: Image width
- `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description).
- `page_changed_recently`: [More Information Needed]
- `context_page_description`: Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section.
<p align='center'>
<img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt="Half Dome" /> </br>
<b>Figure: WIT annotation example. </b>
</p>
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
### Data Splits
All data is held in `train` split, with a total of 37046386 rows.
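
Given the size of the split, it can be more practical to iterate over it in streaming mode rather than downloading everything first; a minimal sketch, assuming the loading script supports the `datasets` streaming API:

```python
from datasets import load_dataset

# Stream the train split instead of materialising ~37M rows on disk
wit = load_dataset("wit", split="train", streaming=True)

for i, row in enumerate(wit):
    print(row["language"], "|", row["page_title"], "|", row["image_url"])
    if i == 4:  # just peek at the first few rows
        break
```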
## Dataset Creation
### Curation Rationale
From the [repository](https://github.com/google-research-datasets/wit#motivation):
> Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.
>
> To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.
>
> The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other
pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a
study using (crowd-sourced) human annotators. As seen in Fig. 3,
we asked raters to answer 3 questions. Given an image and the page
title, raters first evaluate the quality of the attribution description
and reference description in the first two questions (order randomized). The third question understands the contextual quality of these
text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes
the image, "Maybe" if it is sufficiently explanatory and "No" if it is
irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21), [@nateraw](https://github.com/nateraw) and [@hassiahk](https://github.com/hassiahk) for adding this dataset. | google/wit | [
"task_categories:text-retrieval",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"source_datasets:extended|wikipedia",
"language:af",
"language:ar",
"language:ast",
"language:azb",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:it",
"language:iw",
"language:ja",
"language:ka",
"language:ko",
"language:la",
"language:lt",
"language:lv",
"language:mk",
"language:ml",
"language:ms",
"language:nl",
"language:nn",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:th",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:vo",
"language:zh",
"license:cc-by-sa-3.0",
"arxiv:2103.01913",
"region:us"
] | 2022-05-02T10:22:32+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["af", "ar", "ast", "azb", "be", "bg", "bn", "br", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "hr", "hu", "hy", "id", "it", "iw", "ja", "ka", "ko", "la", "lt", "lv", "mk", "ml", "ms", "nl", "nn", "no", "pl", "pt", "ro", "ru", "sk", "sl", "sr", "sv", "th", "tr", "uk", "ur", "vi", "vo", "zh"], "license": ["cc-by-sa-3.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original", "extended|wikipedia"], "task_categories": ["text-retrieval", "image-to-text"], "task_ids": ["text-retrieval-other-text-image-retrieval", "image-captioning"], "paperswithcode_id": "wit", "pretty_name": "Wikipedia-based Image Text"} | 2022-07-04T09:47:07+00:00 | [
"2103.01913"
] | [
"af",
"ar",
"ast",
"azb",
"be",
"bg",
"bn",
"br",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gl",
"hr",
"hu",
"hy",
"id",
"it",
"iw",
"ja",
"ka",
"ko",
"la",
"lt",
"lv",
"mk",
"ml",
"ms",
"nl",
"nn",
"no",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sr",
"sv",
"th",
"tr",
"uk",
"ur",
"vi",
"vo",
"zh"
] | TAGS
#task_categories-text-retrieval #task_categories-image-to-text #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #source_datasets-extended|wikipedia #language-Afrikaans #language-Arabic #language-Asturian #language-South Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-iw #language-Japanese #language-Georgian #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Macedonian #language-Malayalam #language-Malay (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Serbian #language-Swedish #language-Thai #language-Turkish #language-Ukrainian #language-Urdu #language-Vietnamese #language-Volapük #language-Chinese #license-cc-by-sa-3.0 #arxiv-2103.01913 #region-us
| Dataset Card for WIT
====================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Dataset Preprocessing
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: WIT homepage
* Repository: WIT repository
* Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
* Leaderboard: WIT leaderboard
* Point of Contact: WIT e-mail
### Dataset Summary
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
A few unique advantages of WIT:
* The largest multimodal dataset (time of this writing) by the number of image-text examples.
* A massively multilingual (first of its kind) with coverage for over 100+ languages.
* A collection of diverse set of concepts and real world entities.
* Brings forth challenging real-world test sets.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
### Supported Tasks and Leaderboards
* 'image-captioning': This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
* 'text-retrieval': The goal in this task is to build a model that retrieves the text closest to an image.
In these tasks, any combination of the 'caption\_reference\_description', 'caption\_attribution\_description' and 'caption\_alt\_text\_description' fields can be used as the input text/caption.
### Languages
The dataset contains examples from all Wikipedia languages, with the following stats:
Dataset Structure
-----------------
### Data Instances
### Data Fields
* 'language': Language code depicting wikipedia language of the page
* 'page\_url': URL to wikipedia page
* 'image\_url': URL to wikipedia image
* 'page\_title': Wikipedia page's title
* 'section\_title': Section's title
* 'hierarchical\_section\_title': Hierarchical section's title
* 'caption\_reference\_description': This is the caption that is visible on the wiki page directly below the image.
* 'caption\_attribution\_description': This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different to the original page article.
* 'caption\_alt\_text\_description': This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
* 'mime\_type': Mime type associated to the image.
* 'original\_height': Image height
* 'original\_width': Image width
* 'is\_main\_image': Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
* 'attribution\_passes\_lang\_id': Compared 'language' field with the attribution language (written in the prefix of the attribution description).
* 'page\_changed\_recently':
* 'context\_page\_description': Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
* 'context\_section\_description': Text within the image's section.

Dataset Creation
----------------
### Curation Rationale
From the repository:
>
> Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.
>
>
> To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.
>
>
> The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).
>
>
>
### Source Data
#### Initial Data Collection and Normalization
From the paper, section 3.1:
>
> We started with all Wikipedia content pages (i.e., ignoring other
> pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.
>
>
>
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However it was human-validated.
From the paper, section 3.7:
>
> To further verify the quality of the WIT dataset we performed a
> study using (crowd-sourced) human annotators. As seen in Fig. 3,
> we asked raters to answer 3 questions. Given an image and the page
> title, raters first evaluate the quality of the attribution description
> and reference description in the first two questions (order randomized). The third question understands the contextual quality of these
> text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes
> the image, "Maybe" if it is sufficiently explanatory and "No" if it is
> irrelevant or the image is inappropriate.
>
>
>
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
From the paper, section 3.4:
>
> Lastly we found that certain image-text pairs occurred very
> frequently. These were often generic images that did not have
> much to do with the main article page. Common examples
> included flags, logos, maps, insignia and such. To prevent
> biasing the data, we heavily under-sampled all such images
>
>
>
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @thomasw21, @nateraw and hassiahk for adding this dataset.
| [
"### Dataset Summary\n\n\nWikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n\n\nA few unique advantages of WIT:\n\n\n* The largest multimodal dataset (time of this writing) by the number of image-text examples.\n* A massively multilingual (first of its kind) with coverage for over 100+ languages.\n* A collection of diverse set of concepts and real world entities.\n* Brings forth challenging real-world test sets.",
"### Dataset Preprocessing\n\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:",
"### Supported Tasks and Leaderboards\n\n\n* 'image-captioning': This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.\n* 'text-retrieval': The goal in this task is to build a model that retrieves the text closest to an image.\n\n\nIn these tasks, any combination of the 'caption\\_reference\\_description', 'caption\\_attribution\\_description' and 'caption\\_alt\\_text\\_description' fields can be used as the input text/caption.",
"### Languages\n\n\nThe dataset contains examples from all Wikipedia languages, with the following stats:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* 'language': Language code depicting wikipedia language of the page\n* 'page\\_url': URL to wikipedia page\n* 'image\\_url': URL to wikipedia image\n* 'page\\_title': Wikipedia page's title\n* 'section\\_title': Section's title\n* 'hierarchical\\_section\\_title': Hierarchical section's title\n* 'caption\\_reference\\_description': This is the caption that is visible on the wiki page directly below the image.\n* 'caption\\_attribution\\_description': This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different to the original page article.\n* 'caption\\_alt\\_text\\_description': This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers\n* 'mime\\_type': Mime type associated to the image.\n* 'original\\_height': Image height\n* 'original\\_width': Image width\n* 'is\\_main\\_image': Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.\n* 'attribution\\_passes\\_lang\\_id': Compared 'language' field with the attribution language (written in the prefix of the attribution description).\n* 'page\\_changed\\_recently':\n* 'context\\_page\\_description': Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.\n* 'context\\_section\\_description': Text within the image's section.\n\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFrom the repository:\n\n\n\n> \n> Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.\n> \n> \n> To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.\n> \n> \n> The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nFrom the paper, section 3.1:\n\n\n\n> \n> We started with all Wikipedia content pages (i.e., ignoring other\n> pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.\n> \n> \n>",
"#### Who are the source language producers?\n\n\nText was extracted from Wikipedia.",
"### Annotations",
"#### Annotation process\n\n\nWIT was constructed using an automatic process. However it was human-validated.\n\n\nFrom the paper, section 3.7:\n\n\n\n> \n> To further verify the quality of the WIT dataset we performed a\n> study using (crowd-sourced) human annotators. As seen in Fig. 3,\n> we asked raters to answer 3 questions. Given an image and the page\n> title, raters first evaluate the quality of the attribution description\n> and reference description in the first two questions (order randomized). The third question understands the contextual quality of these\n> text descriptions given the page description and caption. Each response is on a 3-point scale: \"Yes\" if the text perfectly describes\n> the image, \"Maybe\" if it is sufficiently explanatory and \"No\" if it is\n> irrelevant or the image is inappropriate.\n> \n> \n>",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases\n\n\nFrom the paper, section 3.4:\n\n\n\n> \n> Lastly we found that certain image-text pairs occurred very\n> frequently. These were often generic images that did not have\n> much to do with the main article page. Common examples\n> included flags, logos, maps, insignia and such. To prevent\n> biasing the data, we heavily under-sampled all such images\n> \n> \n>",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @thomasw21, @nateraw and hassiahk for adding this dataset."
] | [
"TAGS\n#task_categories-text-retrieval #task_categories-image-to-text #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #source_datasets-extended|wikipedia #language-Afrikaans #language-Arabic #language-Asturian #language-South Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-iw #language-Japanese #language-Georgian #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Macedonian #language-Malayalam #language-Malay (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Serbian #language-Swedish #language-Thai #language-Turkish #language-Ukrainian #language-Urdu #language-Vietnamese #language-Volapük #language-Chinese #license-cc-by-sa-3.0 #arxiv-2103.01913 #region-us \n",
"### Dataset Summary\n\n\nWikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n\n\nA few unique advantages of WIT:\n\n\n* The largest multimodal dataset (time of this writing) by the number of image-text examples.\n* A massively multilingual (first of its kind) with coverage for over 100+ languages.\n* A collection of diverse set of concepts and real world entities.\n* Brings forth challenging real-world test sets.",
"### Dataset Preprocessing\n\n\nThis dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:",
"### Supported Tasks and Leaderboards\n\n\n* 'image-captioning': This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.\n* 'text-retrieval': The goal in this task is to build a model that retrieves the text closest to an image.\n\n\nIn these tasks, any combination of the 'caption\\_reference\\_description', 'caption\\_attribution\\_description' and 'caption\\_alt\\_text\\_description' fields can be used as the input text/caption.",
"### Languages\n\n\nThe dataset contains examples from all Wikipedia languages, with the following stats:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* 'language': Language code depicting wikipedia language of the page\n* 'page\\_url': URL to wikipedia page\n* 'image\\_url': URL to wikipedia image\n* 'page\\_title': Wikipedia page's title\n* 'section\\_title': Section's title\n* 'hierarchical\\_section\\_title': Hierarchical section's title\n* 'caption\\_reference\\_description': This is the caption that is visible on the wiki page directly below the image.\n* 'caption\\_attribution\\_description': This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different to the original page article.\n* 'caption\\_alt\\_text\\_description': This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers\n* 'mime\\_type': Mime type associated to the image.\n* 'original\\_height': Image height\n* 'original\\_width': Image width\n* 'is\\_main\\_image': Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.\n* 'attribution\\_passes\\_lang\\_id': Compared 'language' field with the attribution language (written in the prefix of the attribution description).\n* 'page\\_changed\\_recently':\n* 'context\\_page\\_description': Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.\n* 'context\\_section\\_description': Text within the image's section.\n\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFrom the repository:\n\n\n\n> \n> Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.\n> \n> \n> To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.\n> \n> \n> The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nFrom the paper, section 3.1:\n\n\n\n> \n> We started with all Wikipedia content pages (i.e., ignoring other\n> pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.\n> \n> \n>",
"#### Who are the source language producers?\n\n\nText was extracted from Wikipedia.",
"### Annotations",
"#### Annotation process\n\n\nWIT was constructed using an automatic process. However it was human-validated.\n\n\nFrom the paper, section 3.7:\n\n\n\n> \n> To further verify the quality of the WIT dataset we performed a\n> study using (crowd-sourced) human annotators. As seen in Fig. 3,\n> we asked raters to answer 3 questions. Given an image and the page\n> title, raters first evaluate the quality of the attribution description\n> and reference description in the first two questions (order randomized). The third question understands the contextual quality of these\n> text descriptions given the page description and caption. Each response is on a 3-point scale: \"Yes\" if the text perfectly describes\n> the image, \"Maybe\" if it is sufficiently explanatory and \"No\" if it is\n> irrelevant or the image is inappropriate.\n> \n> \n>",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases\n\n\nFrom the paper, section 3.4:\n\n\n\n> \n> Lastly we found that certain image-text pairs occurred very\n> frequently. These were often generic images that did not have\n> much to do with the main article page. Common examples\n> included flags, logos, maps, insignia and such. To prevent\n> biasing the data, we heavily under-sampled all such images\n> \n> \n>",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @thomasw21, @nateraw and hassiahk for adding this dataset."
] |
82895de1f6c69a73fd9c4fdb2d6adff0a8fbc4b4 |
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:[email protected])
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the image captioning task. It includes data from 351 languages across 31 language families. There is a mean of 32 stories and 319 image-caption pairs per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 351 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul
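If you want to check programmatically which of these codes are exposed as dataset configurations, a minimal sketch using the `datasets` library (assuming you have already been granted access and are logged in, e.g. via `huggingface-cli login`) looks like this:
```
from datasets import get_dataset_config_names

# List the ISO 639-3 language configurations published for this dataset.
# The dataset is gated, so make sure you are logged in first.
configs = get_dataset_config_names("sil-ai/bloom-captioning")
print(f"{len(configs)} language configs available")
print(configs[:10])
```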
## Dataset Statistics
Some of the languages in the dataset contain only one or a handful of "stories." These are not split between training, validation, and test. For languages with a higher number of available stories, we include the following statistics:
| ISO 639-3 | stories | image-caption pairs |
|:------------|-----------:|-----------------------:|
| ahk | 101 | 907 |
| awa | 163 | 1200 |
| bam | 4 | 86 |
| ben | 251 | 2235 |
| bho | 173 | 1172 |
| boz | 5 | 102 |
| bzi | 66 | 497 |
| cak | 67 | 817 |
| ceb | 418 | 2953 |
| cgc | 197 | 1638 |
| chd | 1 | 84 |
| dty | 172 | 1310 |
| eng | 2633 | 28618 |
| fas | 129 | 631 |
| fra | 403 | 5278 |
| hat | 260 | 2411 |
| hau | 256 | 1865 |
| hbb | 27 | 273 |
| ind | 259 | 2177 |
| jra | 139 | 1423 |
| kak | 195 | 1416 |
| kan | 21 | 168 |
| kek | 36 | 621 |
| kir | 382 | 4026 |
| kjb | 102 | 984 |
| kor | 132 | 2773 |
| mai | 180 | 1211 |
| mam | 134 | 1317 |
| mhx | 98 | 945 |
| mya | 38 | 421 |
| myk | 34 | 341 |
| nep | 200 | 1507 |
| new | 177 | 1225 |
| por | 163 | 3101 |
| quc | 99 | 817 |
| rus | 353 | 3933 |
| sdk | 11 | 153 |
| snk | 35 | 356 |
| spa | 528 | 6111 |
| stk | 7 | 113 |
| tgl | 0 | 0 |
| tha | 285 | 3023 |
| thl | 185 | 1464 |
| tpi | 201 | 2162 |
## Dataset Structure
### Data Instances
The examples look like this for Hausa:
```
from datasets import load_dataset

# Specify the ISO 639-3 language code, e.g. "hau" for Hausa.
iso639_3_letter_code = "hau"
dataset = load_dataset("sil-ai/bloom-captioning", iso639_3_letter_code,
                       use_auth_token=True, download_mode='force_redownload')

# An entry in the dataset consists of an image caption along with
# a link to the corresponding image (and various pieces of metadata).
print(dataset['train'][0])
```
This would produce an output:
```
{'image_id': '5e7e2ab6-493f-4430-a635-695fbff76cf0',
'image_url': 'https://bloom-vist.s3.amazonaws.com/%E0%A4%AF%E0%A5%87%E0%A4%B8%E0%A5%81%20%E0%A4%9A%E0%A5%81%E0%A4%B5%E0%A4%BE%20%E0%A4%89%E0%A4%A0%E0%A5%81%E0%A4%99%E0%A5%8D%E2%80%8C%E0%A4%99%E0%A4%BF%20%E0%A4%B2%E0%A4%BE%E0%A4%AE%E0%A5%8D%E2%80%8C%E0%A4%9F%E0%A4%BF%E0%A4%AF%E0%A4%BE%E0%A4%A8%E0%A4%BE/image2.jpg',
'caption': 'Lokacinan almajiran suna tuƙa jirgin ruwansu, amma can cikin dare sun kai tsakiyar tafkin kaɗai. Suna tuƙi da wahala saboda iska tana busawa da ƙarfi gaba da su.',
'story_id': 'cd17125d-66c6-467c-b6c3-7463929faff9',
'album_id': 'a3074fc4-b88f-4769-a6de-dc952fdb35f0',
'original_bloom_language_tag': 'ha',
'index_in_story': 0}
```
To download all of the images locally to a directory `images`, you can do something similar to the following:
```
import io
import os
import urllib.request
import uuid
from concurrent.futures import ThreadPoolExecutor
from functools import partial

from PIL import Image

from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()

# Make sure the target directory exists before saving any images.
os.makedirs("images", exist_ok=True)
def fetch_single_image(image_url, timeout=None, retries=0):
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
if 'png' in image_url:
png = Image.open(io.BytesIO(req.read())).convert('RGBA')
png.load() # required for png.split()
background = Image.new("RGB", png.size, (255, 255, 255))
background.paste(png, mask=png.split()[3]) # 3 is the alpha channel
image_id = str(uuid.uuid4())
image_path = "images/" + image_id + ".jpg"
background.save(image_path, 'JPEG', quality=80)
else:
image = Image.open(io.BytesIO(req.read()))
image_id = str(uuid.uuid4())
image_path = "images/" + image_id + ".jpg"
image.save(image_path)
return image_path
def fetch_images(batch, num_threads, timeout=None, retries=3):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image_path"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dataset = dataset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Data Fields
The metadata fields below are available:
- **image_id**: a unique ID for the image
- **image_url**: a link for downloading the image
- **caption**: a caption corresponding to the image
- **story_id**: a unique ID for the corresponding story in which the caption appears
- **album_id**: a unique ID for the corresponding album in which the image appears
- **original_bloom_language_tag**: the original language identification from the Bloom library
- **index_in_story**: an index corresponding to the order of the image-caption pair in the corresponding story
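As a hedged illustration of how these fields fit together (assuming the `hau` config and the field names exactly as listed above), the captions of one story can be reassembled in reading order like this:
```
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("sil-ai/bloom-captioning", "hau", use_auth_token=True)

# Group the training examples by story and order each story's captions
# by their position in the story.
stories = defaultdict(list)
for example in dataset["train"]:
    stories[example["story_id"]].append(example)

story_id, examples = next(iter(stories.items()))
for example in sorted(examples, key=lambda ex: ex["index_in_story"]):
    print(example["index_in_story"], example["image_url"])
    print(example["caption"])
```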
### Data Splits
All languages include a train, validation, and test split. However, for languages with a small number of stories, some of these splits may be empty. In such cases, we recommend using any available data for testing only or for zero-shot experiments.
**NOTE:** The captions for the test split are currently hidden due to an ongoing shared task competition. They have been replaced by a placeholder `<hidden>` token.
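A minimal sketch (assuming the `train`/`validation`/`test` split names and the `<hidden>` placeholder described above) for checking which splits contain data and excluding hidden test captions:
```
from datasets import load_dataset

dataset = load_dataset("sil-ai/bloom-captioning", "hau", use_auth_token=True)

# Report how many image-caption pairs each split holds; for languages with
# few stories some of these splits may be empty.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Test captions are currently replaced by a placeholder, so drop them
# before any local evaluation.
visible_test = dataset["test"].filter(lambda ex: ex["caption"] != "<hidden>")
print(f"{len(visible_test)} test examples with visible captions")
```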
## Changelog
- **25 October 2022** - Initial release
- **25 October 2022** - Update to include licenses on each data item.
| sil-ai/bloom-captioning | [
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:afr",
"language:af",
"language:aaa",
"language:abc",
"language:ada",
"language:adq",
"language:aeu",
"language:agq",
"language:ags",
"language:ahk",
"language:aia",
"language:ajz",
"language:aka",
"language:ak",
"language:ame",
"language:amh",
"language:am",
"language:amp",
"language:amu",
"language:ann",
"language:aph",
"language:awa",
"language:awb",
"language:azn",
"language:azo",
"language:bag",
"language:bam",
"language:bm",
"language:baw",
"language:bax",
"language:bbk",
"language:bcc",
"language:bce",
"language:bec",
"language:bef",
"language:ben",
"language:bn",
"language:bfd",
"language:bfm",
"language:bfn",
"language:bgf",
"language:bho",
"language:bhs",
"language:bis",
"language:bi",
"language:bjn",
"language:bjr",
"language:bkc",
"language:bkh",
"language:bkm",
"language:bkx",
"language:bob",
"language:bod",
"language:bo",
"language:boz",
"language:bqm",
"language:bra",
"language:brb",
"language:bri",
"language:brv",
"language:bss",
"language:bud",
"language:buo",
"language:bwt",
"language:bwx",
"language:bxa",
"language:bya",
"language:bze",
"language:bzi",
"language:cak",
"language:cbr",
"language:ceb",
"language:cgc",
"language:chd",
"language:chp",
"language:cim",
"language:clo",
"language:cmn",
"language:zh",
"language:cmo",
"language:csw",
"language:cuh",
"language:cuv",
"language:dag",
"language:ddg",
"language:ded",
"language:deu",
"language:de",
"language:dig",
"language:dje",
"language:dmg",
"language:dnw",
"language:dtp",
"language:dtr",
"language:dty",
"language:dug",
"language:eee",
"language:ekm",
"language:enb",
"language:enc",
"language:eng",
"language:en",
"language:ewo",
"language:fas",
"language:fa",
"language:fil",
"language:fli",
"language:fon",
"language:fra",
"language:fr",
"language:fub",
"language:fuh",
"language:gal",
"language:gbj",
"language:gou",
"language:gsw",
"language:guc",
"language:guj",
"language:gu",
"language:guz",
"language:gwc",
"language:hao",
"language:hat",
"language:ht",
"language:hau",
"language:ha",
"language:hbb",
"language:hig",
"language:hil",
"language:hin",
"language:hi",
"language:hla",
"language:hna",
"language:hre",
"language:hro",
"language:idt",
"language:ilo",
"language:ind",
"language:id",
"language:ino",
"language:isu",
"language:ita",
"language:it",
"language:jgo",
"language:jmx",
"language:jpn",
"language:ja",
"language:jra",
"language:kak",
"language:kam",
"language:kan",
"language:kn",
"language:kau",
"language:kr",
"language:kbq",
"language:kbx",
"language:kby",
"language:kek",
"language:ken",
"language:khb",
"language:khm",
"language:km",
"language:kik",
"language:ki",
"language:kin",
"language:rw",
"language:kir",
"language:ky",
"language:kjb",
"language:kmg",
"language:kmr",
"language:ku",
"language:kms",
"language:kmu",
"language:kor",
"language:ko",
"language:kqr",
"language:krr",
"language:ksw",
"language:kur",
"language:kvt",
"language:kwd",
"language:kwu",
"language:kwx",
"language:kxp",
"language:kyq",
"language:laj",
"language:lan",
"language:lao",
"language:lo",
"language:lbr",
"language:lfa",
"language:lgg",
"language:lgr",
"language:lhm",
"language:lhu",
"language:lkb",
"language:llg",
"language:lmp",
"language:lns",
"language:loh",
"language:lsi",
"language:lts",
"language:lug",
"language:lg",
"language:luy",
"language:lwl",
"language:mai",
"language:mal",
"language:ml",
"language:mam",
"language:mar",
"language:mr",
"language:mdr",
"language:mfh",
"language:mfj",
"language:mgg",
"language:mgm",
"language:mgo",
"language:mgq",
"language:mhx",
"language:miy",
"language:mkz",
"language:mle",
"language:mlk",
"language:mlw",
"language:mmu",
"language:mne",
"language:mnf",
"language:mnw",
"language:mot",
"language:mqj",
"language:mrn",
"language:mry",
"language:msb",
"language:muv",
"language:mve",
"language:mxu",
"language:mya",
"language:my",
"language:myk",
"language:myx",
"language:mzm",
"language:nas",
"language:nco",
"language:nep",
"language:ne",
"language:new",
"language:nge",
"language:ngn",
"language:nhx",
"language:njy",
"language:nla",
"language:nld",
"language:nl",
"language:nlv",
"language:nod",
"language:nsk",
"language:nsn",
"language:nso",
"language:nst",
"language:nuj",
"language:nwe",
"language:nwi",
"language:nxa",
"language:nxl",
"language:nya",
"language:ny",
"language:nyo",
"language:nyu",
"language:nza",
"language:odk",
"language:oji",
"language:oj",
"language:oki",
"language:omw",
"language:ori",
"language:or",
"language:ozm",
"language:pae",
"language:pag",
"language:pan",
"language:pa",
"language:pbt",
"language:pce",
"language:pcg",
"language:pdu",
"language:pea",
"language:pex",
"language:pis",
"language:pkb",
"language:pmf",
"language:pnz",
"language:por",
"language:pt",
"language:psp",
"language:pwg",
"language:qaa",
"language:qub",
"language:quc",
"language:quf",
"language:quz",
"language:qve",
"language:qvh",
"language:qvm",
"language:qvo",
"language:qxh",
"language:rel",
"language:rnl",
"language:ron",
"language:ro",
"language:roo",
"language:rue",
"language:rug",
"language:rus",
"language:ru",
"language:san",
"language:sa",
"language:saq",
"language:sat",
"language:sdk",
"language:sea",
"language:sgd",
"language:shn",
"language:sml",
"language:snk",
"language:snl",
"language:som",
"language:so",
"language:sot",
"language:st",
"language:sox",
"language:spa",
"language:es",
"language:sps",
"language:ssn",
"language:stk",
"language:swa",
"language:sw",
"language:swh",
"language:sxb",
"language:syw",
"language:taj",
"language:tam",
"language:ta",
"language:tbj",
"language:tdb",
"language:tdg",
"language:tdt",
"language:teo",
"language:tet",
"language:tgk",
"language:tg",
"language:tha",
"language:th",
"language:the",
"language:thk",
"language:thl",
"language:thy",
"language:tio",
"language:tkd",
"language:tnl",
"language:tnn",
"language:tnp",
"language:tnt",
"language:tod",
"language:tom",
"language:tpi",
"language:tpl",
"language:tpu",
"language:tsb",
"language:tsn",
"language:tn",
"language:tso",
"language:ts",
"language:tuv",
"language:tuz",
"language:tvs",
"language:udg",
"language:unr",
"language:urd",
"language:ur",
"language:uzb",
"language:uz",
"language:ven",
"language:ve",
"language:vie",
"language:vi",
"language:vif",
"language:war",
"language:wbm",
"language:wbr",
"language:wms",
"language:wni",
"language:wnk",
"language:wtk",
"language:xho",
"language:xh",
"language:xkg",
"language:xmd",
"language:xmg",
"language:xmm",
"language:xog",
"language:xty",
"language:yas",
"language:yav",
"language:ybb",
"language:ybh",
"language:ybi",
"language:ydd",
"language:yea",
"language:yet",
"language:yid",
"language:yi",
"language:yin",
"language:ymp",
"language:zaw",
"language:zho",
"language:zlm",
"language:zuh",
"language:zul",
"language:zu",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-05-02T13:24:03+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["afr", "af", "aaa", "abc", "ada", "adq", "aeu", "agq", "ags", "ahk", "aia", "ajz", "aka", "ak", "ame", "amh", "am", "amp", "amu", "ann", "aph", "awa", "awb", "azn", "azo", "bag", "bam", "bm", "baw", "bax", "bbk", "bcc", "bce", "bec", "bef", "ben", "bn", "bfd", "bfm", "bfn", "bgf", "bho", "bhs", "bis", "bi", "bjn", "bjr", "bkc", "bkh", "bkm", "bkx", "bob", "bod", "bo", "boz", "bqm", "bra", "brb", "bri", "brv", "bss", "bud", "buo", "bwt", "bwx", "bxa", "bya", "bze", "bzi", "cak", "cbr", "ceb", "cgc", "chd", "chp", "cim", "clo", "cmn", "zh", "cmo", "csw", "cuh", "cuv", "dag", "ddg", "ded", "deu", "de", "dig", "dje", "dmg", "dnw", "dtp", "dtr", "dty", "dug", "eee", "ekm", "enb", "enc", "eng", "en", "ewo", "fas", "fa", "fil", "fli", "fon", "fra", "fr", "fub", "fuh", "gal", "gbj", "gou", "gsw", "guc", "guj", "gu", "guz", "gwc", "hao", "hat", "ht", "hau", "ha", "hbb", "hig", "hil", "hin", "hi", "hla", "hna", "hre", "hro", "idt", "ilo", "ind", "id", "ino", "isu", "ita", "it", "jgo", "jmx", "jpn", "ja", "jra", "kak", "kam", "kan", "kn", "kau", "kr", "kbq", "kbx", "kby", "kek", "ken", "khb", "khm", "km", "kik", "ki", "kin", "rw", "kir", "ky", "kjb", "kmg", "kmr", "ku", "kms", "kmu", "kor", "ko", "kqr", "krr", "ksw", "kur", "ku", "kvt", "kwd", "kwu", "kwx", "kxp", "kyq", "laj", "lan", "lao", "lo", "lbr", "lfa", "lgg", "lgr", "lhm", "lhu", "lkb", "llg", "lmp", "lns", "loh", "lsi", "lts", "lug", "lg", "luy", "lwl", "mai", "mal", "ml", "mam", "mar", "mr", "mdr", "mfh", "mfj", "mgg", "mgm", "mgo", "mgq", "mhx", "miy", "mkz", "mle", "mlk", "mlw", "mmu", "mne", "mnf", "mnw", "mot", "mqj", "mrn", "mry", "msb", "muv", "mve", "mxu", "mya", "my", "myk", "myx", "mzm", "nas", "nco", "nep", "ne", "new", "nge", "ngn", "nhx", "njy", "nla", "nld", "nl", "nlv", "nod", "nsk", "nsn", "nso", "nst", "nuj", "nwe", "nwi", "nxa", "nxl", "nya", "ny", "nyo", "nyu", "nza", "odk", "oji", "oj", "oki", "omw", "ori", "or", "ozm", "pae", "pag", "pan", "pa", "pbt", "pce", "pcg", "pdu", "pea", "pex", "pis", "pkb", "pmf", "pnz", "por", "pt", "psp", "pwg", "qaa", "qub", "quc", "quf", "quz", "qve", "qvh", "qvm", "qvo", "qxh", "rel", "rnl", "ron", "ro", "roo", "rue", "rug", "rus", "ru", "san", "sa", "saq", "sat", "sdk", "sea", "sgd", "shn", "sml", "snk", "snl", "som", "so", "sot", "st", "sox", "spa", "es", "sps", "ssn", "stk", "swa", "sw", "swh", "sxb", "syw", "taj", "tam", "ta", "tbj", "tdb", "tdg", "tdt", "teo", "tet", "tgk", "tg", "tha", "th", "the", "thk", "thl", "thy", "tio", "tkd", "tnl", "tnn", "tnp", "tnt", "tod", "tom", "tpi", "tpl", "tpu", "tsb", "tsn", "tn", "tso", "ts", "tuv", "tuz", "tvs", "udg", "unr", "urd", "ur", "uzb", "uz", "ven", "ve", "vie", "vi", "vif", "war", "wbm", "wbr", "wms", "wni", "wnk", "wtk", "xho", "xh", "xkg", "xmd", "xmg", "xmm", "xog", "xty", "yas", "yav", "ybb", "ybh", "ybi", "ydd", "yea", "yet", "yid", "yi", "yin", "ymp", "zaw", "zho", "zh", "zlm", "zuh", "zul", "zu"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_ids": ["image-captioning"], "pretty_name": "BloomCaptioning", "extra_gated_prompt": "One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled explicitly with a `cc-by-sa` license). 
A \"license\" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.\n\nThese [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that: \n\n1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license. If you would like to ask about commercial uses of this dataset, please [email us](mailto:[email protected]).\n2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. \n3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material. \n\nIn addition to the above implied by Creative Commons and when clicking \"Access Repository\" below, you agree: \n\n1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.\n2. That your *contact information* (email address and username) can be shared with the model authors as well.\n ", "extra_gated_fields": {"I have read the License and agree with its terms": "checkbox"}} | 2022-12-10T02:16:13+00:00 | [] | [
"afr",
"af",
"aaa",
"abc",
"ada",
"adq",
"aeu",
"agq",
"ags",
"ahk",
"aia",
"ajz",
"aka",
"ak",
"ame",
"amh",
"am",
"amp",
"amu",
"ann",
"aph",
"awa",
"awb",
"azn",
"azo",
"bag",
"bam",
"bm",
"baw",
"bax",
"bbk",
"bcc",
"bce",
"bec",
"bef",
"ben",
"bn",
"bfd",
"bfm",
"bfn",
"bgf",
"bho",
"bhs",
"bis",
"bi",
"bjn",
"bjr",
"bkc",
"bkh",
"bkm",
"bkx",
"bob",
"bod",
"bo",
"boz",
"bqm",
"bra",
"brb",
"bri",
"brv",
"bss",
"bud",
"buo",
"bwt",
"bwx",
"bxa",
"bya",
"bze",
"bzi",
"cak",
"cbr",
"ceb",
"cgc",
"chd",
"chp",
"cim",
"clo",
"cmn",
"zh",
"cmo",
"csw",
"cuh",
"cuv",
"dag",
"ddg",
"ded",
"deu",
"de",
"dig",
"dje",
"dmg",
"dnw",
"dtp",
"dtr",
"dty",
"dug",
"eee",
"ekm",
"enb",
"enc",
"eng",
"en",
"ewo",
"fas",
"fa",
"fil",
"fli",
"fon",
"fra",
"fr",
"fub",
"fuh",
"gal",
"gbj",
"gou",
"gsw",
"guc",
"guj",
"gu",
"guz",
"gwc",
"hao",
"hat",
"ht",
"hau",
"ha",
"hbb",
"hig",
"hil",
"hin",
"hi",
"hla",
"hna",
"hre",
"hro",
"idt",
"ilo",
"ind",
"id",
"ino",
"isu",
"ita",
"it",
"jgo",
"jmx",
"jpn",
"ja",
"jra",
"kak",
"kam",
"kan",
"kn",
"kau",
"kr",
"kbq",
"kbx",
"kby",
"kek",
"ken",
"khb",
"khm",
"km",
"kik",
"ki",
"kin",
"rw",
"kir",
"ky",
"kjb",
"kmg",
"kmr",
"ku",
"kms",
"kmu",
"kor",
"ko",
"kqr",
"krr",
"ksw",
"kur",
"kvt",
"kwd",
"kwu",
"kwx",
"kxp",
"kyq",
"laj",
"lan",
"lao",
"lo",
"lbr",
"lfa",
"lgg",
"lgr",
"lhm",
"lhu",
"lkb",
"llg",
"lmp",
"lns",
"loh",
"lsi",
"lts",
"lug",
"lg",
"luy",
"lwl",
"mai",
"mal",
"ml",
"mam",
"mar",
"mr",
"mdr",
"mfh",
"mfj",
"mgg",
"mgm",
"mgo",
"mgq",
"mhx",
"miy",
"mkz",
"mle",
"mlk",
"mlw",
"mmu",
"mne",
"mnf",
"mnw",
"mot",
"mqj",
"mrn",
"mry",
"msb",
"muv",
"mve",
"mxu",
"mya",
"my",
"myk",
"myx",
"mzm",
"nas",
"nco",
"nep",
"ne",
"new",
"nge",
"ngn",
"nhx",
"njy",
"nla",
"nld",
"nl",
"nlv",
"nod",
"nsk",
"nsn",
"nso",
"nst",
"nuj",
"nwe",
"nwi",
"nxa",
"nxl",
"nya",
"ny",
"nyo",
"nyu",
"nza",
"odk",
"oji",
"oj",
"oki",
"omw",
"ori",
"or",
"ozm",
"pae",
"pag",
"pan",
"pa",
"pbt",
"pce",
"pcg",
"pdu",
"pea",
"pex",
"pis",
"pkb",
"pmf",
"pnz",
"por",
"pt",
"psp",
"pwg",
"qaa",
"qub",
"quc",
"quf",
"quz",
"qve",
"qvh",
"qvm",
"qvo",
"qxh",
"rel",
"rnl",
"ron",
"ro",
"roo",
"rue",
"rug",
"rus",
"ru",
"san",
"sa",
"saq",
"sat",
"sdk",
"sea",
"sgd",
"shn",
"sml",
"snk",
"snl",
"som",
"so",
"sot",
"st",
"sox",
"spa",
"es",
"sps",
"ssn",
"stk",
"swa",
"sw",
"swh",
"sxb",
"syw",
"taj",
"tam",
"ta",
"tbj",
"tdb",
"tdg",
"tdt",
"teo",
"tet",
"tgk",
"tg",
"tha",
"th",
"the",
"thk",
"thl",
"thy",
"tio",
"tkd",
"tnl",
"tnn",
"tnp",
"tnt",
"tod",
"tom",
"tpi",
"tpl",
"tpu",
"tsb",
"tsn",
"tn",
"tso",
"ts",
"tuv",
"tuz",
"tvs",
"udg",
"unr",
"urd",
"ur",
"uzb",
"uz",
"ven",
"ve",
"vie",
"vi",
"vif",
"war",
"wbm",
"wbr",
"wms",
"wni",
"wnk",
"wtk",
"xho",
"xh",
"xkg",
"xmd",
"xmg",
"xmm",
"xog",
"xty",
"yas",
"yav",
"ybb",
"ybh",
"ybi",
"ydd",
"yea",
"yet",
"yid",
"yi",
"yin",
"ymp",
"zaw",
"zho",
"zlm",
"zuh",
"zul",
"zu"
] | TAGS
#task_ids-image-captioning #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Afrikaans #language-Ghotuo #language-Ambala Ayta #language-Adangme #language-Adangbe #language-Akeu #language-Aghem #language-Esimbi #language-Akha #language-Arosi #language-Amri Karbi #language-Akan #language-Akan #language-Yanesha' #language-Amharic #language-Amharic #language-Alamblak #language-Guerrero Amuzgo #language-Obolo #language-Athpariya #language-Awadhi #language-Awa (Papua New Guinea) #language-Western Durango Nahuatl #language-Awing #language-Tuki #language-Bambara #language-Bambara #language-Bambili-Bambui #language-Bamun #language-Babanki #language-Southern Balochi #language-Bamenyam #language-Iceve-Maci #language-Benabena #language-Bengali #language-Bengali #language-Bafut #language-Mmen #language-Bunak #language-Bangandu #language-Bhojpuri #language-Buwal #language-Bislama #language-Bislama #language-Banjar #language-Binumarien #language-Baka (Cameroon) #language-Bakoko #language-Kom (Cameroon) #language-Baikeno #language-Aweer #language-Tibetan #language-Tibetan #language-Tiéyaxo Bozo #language-Wumboko #language-Braj #language-Brao #language-Mokpwe #language-Western Bru #language-Akoose #language-Ntcham #language-Terei #language-Bafaw-Balong #language-Bu-Nao Bunu #language-Tairaha #language-Batak #language-Jenaama Bozo #language-Bisu #language-Kaqchikel #language-Cashibo-Cacataibo #language-Cebuano #language-Kagayanen #language-Highland Oaxaca Chontal #language-Chipewyan #language-Cimbrian #language-Lowland Oaxaca Chontal #language-Mandarin Chinese #language-Chinese #language-Central Mnong #language-Swampy Cree #language-Chuka #language-Cuvok #language-Dagbani #language-Fataluku #language-Dedua #language-German #language-German #language-Digo #language-Zarma #language-Upper Kinabatangan #language-Western Dani #language-Kadazan Dusun #language-Lotud #language-Dotyali #language-Duruma #language-E #language-Elip #language-Markweeta #language-En #language-English #language-English #language-Ewondo #language-Persian #language-Persian #language-Filipino #language-Fali #language-Fon #language-French #language-French #language-Adamawa Fulfulde #language-Western Niger Fulfulde #language-Galolen #language-Bodo Gadaba #language-Gavar #language-Swiss German #language-Wayuu #language-Gujarati #language-Gujarati #language-Gusii #language-Gawri #language-Hakö #language-Haitian #language-Haitian #language-Hausa #language-Hausa #language-Huba #language-Kamwe #language-Hiligaynon #language-Hindi #language-Hindi #language-Halia #language-Mina (Cameroon) #language-Hre #language-Haroi #language-Idaté #language-Iloko #language-Indonesian #language-Indonesian #language-Inoke-Yate #language-Isu (Menchum Division) #language-Italian #language-Italian #language-Ngomba #language-Western Juxtlahuaca Mixtec #language-Japanese #language-Japanese #language-Jarai #language-Kalanguya #language-Kamba (Kenya) #language-Kannada #language-Kannada #language-Kanuri #language-Kanuri #language-Kamano #language-Ap Ma #language-Manga Kanuri #language-Kekchí #language-Kenyang #language-Lü #language-Khmer #language-Khmer #language-Kikuyu #language-Kikuyu #language-Kinyarwanda #language-Kinyarwanda #language-Kirghiz #language-Kirghiz #language-Q'anjob'al #language-Kâte #language-Northern Kurdish #language-Kurdish #language-Kamasau #language-Kanite #language-Korean #language-Korean #language-Kimaragang 
#language-Krung #language-S'gaw Karen #language-Kurdish #language-Lahta Karen #language-Kwaio #language-Kwakum #language-Khirwar #language-Wadiyara Koli #language-Kenga #language-Lango (Uganda) #language-Laru #language-Lao #language-Lao #language-Lohorung #language-Lefa #language-Lugbara #language-Lengo #language-Lhomi #language-Lahu #language-Kabras #language-Lole #language-Limbum #language-Lamnso' #language-Laarim #language-Lashi #language-Tachoni #language-Ganda #language-Ganda #language-Luyia #language-Eastern Lawa #language-Maithili #language-Malayalam #language-Malayalam #language-Mam #language-Marathi #language-Marathi #language-Mandar #language-Matal #language-Mefele #language-Mpumpong #language-Mambae #language-Meta' #language-Malila #language-Maru #language-Ayutla Mixtec #language-Makasae #language-Manambu #language-Ilwana #language-Moloko #language-Mmaala #language-Naba #language-Mundani #language-Mon #language-Barí #language-Mamasa #language-Cheke Holo #language-Mandaya #language-Masbatenyo #language-Muthuvan #language-Marwari (Pakistan) #language-Mada (Cameroon) #language-Burmese #language-Burmese #language-Mamara Senoufo #language-Masaaba #language-Mumuye #language-Naasioi #language-Sibe #language-Nepali (macrolanguage) #language-Nepali (macrolanguage) #language-Newari #language-Ngemba #language-Ngwo #language-Isthmus-Mecayapan Nahuatl #language-Njyem #language-Ngombale #language-Dutch #language-Dutch #language-Orizaba Nahuatl #language-Northern Thai #language-Naskapi #language-Nehan #language-Pedi #language-Tase Naga #language-Nyole #language-Ngwe #language-Southwest Tanna #language-Nauete #language-South Nuaulu #language-Nyanja #language-Nyanja #language-Nyoro #language-Nyungwe #language-Tigon Mbembe #language-Od #language-Ojibwa #language-Ojibwa #language-Okiek #language-South Tairora #language-Oriya (macrolanguage) #language-Oriya (macrolanguage) #language-Koonzime #language-Pagibete #language-Pangasinan #language-Panjabi #language-Panjabi #language-Southern Pashto #language-Ruching Palaung #language-Paniya #language-Kayan #language-Peranakan Indonesian #language-Petats #language-Pijin #language-Pokomo #language-Pamona #language-Pana (Central African Republic) #language-Portuguese #language-Portuguese #language-Philippine Sign Language #language-Gapapaiwa #language-qaa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-Cusco Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-Napo Lowland Quechua #language-Panao Huánuco Quechua #language-Rendille #language-Ranglong #language-Romanian #language-Romanian #language-Rotokas #language-Rusyn #language-Roviana #language-Russian #language-Russian #language-Sanskrit #language-Sanskrit #language-Samburu #language-Santali #language-Sos Kundi #language-Semai #language-Surigaonon #language-Shan #language-Central Sama #language-Soninke #language-Sangil #language-Somali #language-Somali #language-Southern Sotho #language-Southern Sotho #language-Swo #language-Spanish #language-Spanish #language-Saposa #language-Waata #language-Arammba #language-Swahili (macrolanguage) #language-Swahili (macrolanguage) #language-Swahili (individual language) #language-Suba #language-Kagate #language-Eastern Tamang #language-Tamil #language-Tamil #language-Tiang #language-Panchpargania #language-Western Tamang #language-Tetun Dili #language-Teso #language-Tetum #language-Tajik #language-Tajik #language-Thai #language-Thai 
#language-Chitwania Tharu #language-Tharaka #language-Dangaura Tharu #language-Tha #language-Teop #language-Tukudede #language-Lenakel #language-North Tanna #language-Whitesands #language-Tontemboan #language-Toma #language-Tombulu #language-Tok Pisin #language-Tlacoapa Me'phaa #language-Tampuan #language-Tsamai #language-Tswana #language-Tswana #language-Tsonga #language-Tsonga #language-Turkana #language-Turka #language-Taveta #language-Muduga #language-Mundari #language-Urdu #language-Urdu #language-Uzbek #language-Uzbek #language-Venda #language-Venda #language-Vietnamese #language-Vietnamese #language-Vili #language-Waray (Philippines) #language-Wa #language-Wagdi #language-Wambon #language-Ndzwani Comorian #language-Wanukaka #language-Watakataui #language-Xhosa #language-Xhosa #language-Kagoro #language-Mbudum #language-Mengaka #language-Manado Malay #language-Soga #language-Yoloxochitl Mixtec #language-Nugunu (Cameroon) #language-Yangben #language-Yemba #language-Yakha #language-Yamphu #language-Eastern Yiddish #language-Ravula #language-Yetfa #language-Yiddish #language-Yiddish #language-Riang Lai #language-Yamap #language-Mitla Zapotec #language-Chinese #language-Malay (individual language) #language-Tokano #language-Zulu #language-Zulu #license-cc-by-nc-4.0 #region-us
| Dataset Description
-------------------
* Homepage: SIL AI
* Point of Contact: SIL AI email
* Source Data: Bloom Library
!logo for Bloom Library !sil-ai logo
Dataset Summary
---------------
Bloom is free, open-source software and an associated website Bloom Library, app, and services developed by SIL International. Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the image captioning task. It includes data from 351 languages across 31 language families. There is a mean of 32 stories and 319 image-caption pairs per language.
Note: If you speak one of these languages and can help provide feedback or corrections, please let us know!
Note: Although this data was used in the training of the BLOOM model, this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool.
Languages
---------
Of the 500+ languages listed at URL, there are 351 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul
Dataset Statistics
------------------
Some of the languages included in the dataset have only one or a few "stories." These are not split between training, validation, and test. For languages with a higher number of available stories, the data is split as described under "Data Splits" below.
Dataset Structure
-----------------
### Data Instances
Each example pairs an image URL with a caption and the story/album metadata described under "Data Fields" below.


To load the data for one language (Hausa is used as the example here) and download all of the images locally into a directory 'images', you can do something similar to the following:
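The sketch below is illustrative rather than the page's original snippet: the repository id 'sil-ai/bloom-captioning' and the 'hau' (Hausa) configuration name are assumptions not stated here, while the field names follow the "Data Fields" list below.

```python
# Illustrative sketch (assumed repository id and config name, see note above).
# Authenticate with the Hugging Face Hub first if the repository is gated.
import os
import urllib.request

from datasets import load_dataset

ds = load_dataset("sil-ai/bloom-captioning", "hau")

print(ds["train"][0])  # one image-caption pair with its metadata fields

os.makedirs("images", exist_ok=True)
for example in ds["train"]:
    filename = os.path.basename(example["image_url"])  # keep the original file name
    urllib.request.urlretrieve(example["image_url"], os.path.join("images", filename))
```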
### Data Fields
The metadata fields below are available:
* image\_id: a unique ID for the image
* image\_url: a link for downloading the image
* caption: a caption corresponding to the image
* story\_id: a unique ID for the corresponding story in which the caption appears
* album\_id: a unique ID for the corresponding album in which the image appears
* original\_bloom\_language\_tag: the original language identification from the Bloom library
* index\_in\_story: an index corresponding to the order of the image-caption pair in the corresponding story
### Data Splits
All languages include a train, validation, and test split. However, for languages having a small number of stories, some of these splits may be empty. In such cases, we recommend using any available data for testing only or for zero-shot experiments.
NOTE: The captions for the test split are currently hidden due to an ongoing shared task competition. They have been replaced by a placeholder '' token.
Changelog
---------
* 25 October 2022 - Initial release
* 25 October 2022 - Update to include licenses on each data item.
| [
"### Data Instances\n\n\nThe examples look like this for Hausa:\n\n\nThis would produce an output:\n\n\nTo download all of the images locally directory 'images', you can do something similar to the following:",
"### Data Fields\n\n\nThe metadata fields below are available:\n\n\n* image\\_id: a unique ID for the image\n* image\\_url: a link for downloading the image\n* caption: a caption corresponding to the image\n* story\\_id: a unique ID for the corresponding story in which the caption appears\n* album\\_id: a unique ID for the corresponding album in which the image appears\n* original\\_bloom\\_language\\_tag: the original language identification from the Bloom library\n* index\\_in\\_story: an index corresponding to the order of the image-caption pair in the corresponding story",
"### Data Splits\n\n\nAll languages include a train, validation, and test split. However, for language having a small number of stories, certain of these splits maybe empty. In such cases, we recommend using any data for testing only or for zero-shot experiments.\n\n\nNOTE: The captions for the test split are currently hidden due to on ongoing shared task competition. They have been replaced by a placeholder '' token.\n\n\nChangelog\n---------\n\n\n* 25 October 2022 - Initial release\n* 25 October 2022 - Update to include licenses on each data item."
] | [
"TAGS\n#task_ids-image-captioning #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Afrikaans #language-Ghotuo #language-Ambala Ayta #language-Adangme #language-Adangbe #language-Akeu #language-Aghem #language-Esimbi #language-Akha #language-Arosi #language-Amri Karbi #language-Akan #language-Akan #language-Yanesha' #language-Amharic #language-Amharic #language-Alamblak #language-Guerrero Amuzgo #language-Obolo #language-Athpariya #language-Awadhi #language-Awa (Papua New Guinea) #language-Western Durango Nahuatl #language-Awing #language-Tuki #language-Bambara #language-Bambara #language-Bambili-Bambui #language-Bamun #language-Babanki #language-Southern Balochi #language-Bamenyam #language-Iceve-Maci #language-Benabena #language-Bengali #language-Bengali #language-Bafut #language-Mmen #language-Bunak #language-Bangandu #language-Bhojpuri #language-Buwal #language-Bislama #language-Bislama #language-Banjar #language-Binumarien #language-Baka (Cameroon) #language-Bakoko #language-Kom (Cameroon) #language-Baikeno #language-Aweer #language-Tibetan #language-Tibetan #language-Tiéyaxo Bozo #language-Wumboko #language-Braj #language-Brao #language-Mokpwe #language-Western Bru #language-Akoose #language-Ntcham #language-Terei #language-Bafaw-Balong #language-Bu-Nao Bunu #language-Tairaha #language-Batak #language-Jenaama Bozo #language-Bisu #language-Kaqchikel #language-Cashibo-Cacataibo #language-Cebuano #language-Kagayanen #language-Highland Oaxaca Chontal #language-Chipewyan #language-Cimbrian #language-Lowland Oaxaca Chontal #language-Mandarin Chinese #language-Chinese #language-Central Mnong #language-Swampy Cree #language-Chuka #language-Cuvok #language-Dagbani #language-Fataluku #language-Dedua #language-German #language-German #language-Digo #language-Zarma #language-Upper Kinabatangan #language-Western Dani #language-Kadazan Dusun #language-Lotud #language-Dotyali #language-Duruma #language-E #language-Elip #language-Markweeta #language-En #language-English #language-English #language-Ewondo #language-Persian #language-Persian #language-Filipino #language-Fali #language-Fon #language-French #language-French #language-Adamawa Fulfulde #language-Western Niger Fulfulde #language-Galolen #language-Bodo Gadaba #language-Gavar #language-Swiss German #language-Wayuu #language-Gujarati #language-Gujarati #language-Gusii #language-Gawri #language-Hakö #language-Haitian #language-Haitian #language-Hausa #language-Hausa #language-Huba #language-Kamwe #language-Hiligaynon #language-Hindi #language-Hindi #language-Halia #language-Mina (Cameroon) #language-Hre #language-Haroi #language-Idaté #language-Iloko #language-Indonesian #language-Indonesian #language-Inoke-Yate #language-Isu (Menchum Division) #language-Italian #language-Italian #language-Ngomba #language-Western Juxtlahuaca Mixtec #language-Japanese #language-Japanese #language-Jarai #language-Kalanguya #language-Kamba (Kenya) #language-Kannada #language-Kannada #language-Kanuri #language-Kanuri #language-Kamano #language-Ap Ma #language-Manga Kanuri #language-Kekchí #language-Kenyang #language-Lü #language-Khmer #language-Khmer #language-Kikuyu #language-Kikuyu #language-Kinyarwanda #language-Kinyarwanda #language-Kirghiz #language-Kirghiz #language-Q'anjob'al #language-Kâte #language-Northern Kurdish #language-Kurdish #language-Kamasau #language-Kanite #language-Korean #language-Korean 
#language-Kimaragang #language-Krung #language-S'gaw Karen #language-Kurdish #language-Lahta Karen #language-Kwaio #language-Kwakum #language-Khirwar #language-Wadiyara Koli #language-Kenga #language-Lango (Uganda) #language-Laru #language-Lao #language-Lao #language-Lohorung #language-Lefa #language-Lugbara #language-Lengo #language-Lhomi #language-Lahu #language-Kabras #language-Lole #language-Limbum #language-Lamnso' #language-Laarim #language-Lashi #language-Tachoni #language-Ganda #language-Ganda #language-Luyia #language-Eastern Lawa #language-Maithili #language-Malayalam #language-Malayalam #language-Mam #language-Marathi #language-Marathi #language-Mandar #language-Matal #language-Mefele #language-Mpumpong #language-Mambae #language-Meta' #language-Malila #language-Maru #language-Ayutla Mixtec #language-Makasae #language-Manambu #language-Ilwana #language-Moloko #language-Mmaala #language-Naba #language-Mundani #language-Mon #language-Barí #language-Mamasa #language-Cheke Holo #language-Mandaya #language-Masbatenyo #language-Muthuvan #language-Marwari (Pakistan) #language-Mada (Cameroon) #language-Burmese #language-Burmese #language-Mamara Senoufo #language-Masaaba #language-Mumuye #language-Naasioi #language-Sibe #language-Nepali (macrolanguage) #language-Nepali (macrolanguage) #language-Newari #language-Ngemba #language-Ngwo #language-Isthmus-Mecayapan Nahuatl #language-Njyem #language-Ngombale #language-Dutch #language-Dutch #language-Orizaba Nahuatl #language-Northern Thai #language-Naskapi #language-Nehan #language-Pedi #language-Tase Naga #language-Nyole #language-Ngwe #language-Southwest Tanna #language-Nauete #language-South Nuaulu #language-Nyanja #language-Nyanja #language-Nyoro #language-Nyungwe #language-Tigon Mbembe #language-Od #language-Ojibwa #language-Ojibwa #language-Okiek #language-South Tairora #language-Oriya (macrolanguage) #language-Oriya (macrolanguage) #language-Koonzime #language-Pagibete #language-Pangasinan #language-Panjabi #language-Panjabi #language-Southern Pashto #language-Ruching Palaung #language-Paniya #language-Kayan #language-Peranakan Indonesian #language-Petats #language-Pijin #language-Pokomo #language-Pamona #language-Pana (Central African Republic) #language-Portuguese #language-Portuguese #language-Philippine Sign Language #language-Gapapaiwa #language-qaa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-Cusco Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-Napo Lowland Quechua #language-Panao Huánuco Quechua #language-Rendille #language-Ranglong #language-Romanian #language-Romanian #language-Rotokas #language-Rusyn #language-Roviana #language-Russian #language-Russian #language-Sanskrit #language-Sanskrit #language-Samburu #language-Santali #language-Sos Kundi #language-Semai #language-Surigaonon #language-Shan #language-Central Sama #language-Soninke #language-Sangil #language-Somali #language-Somali #language-Southern Sotho #language-Southern Sotho #language-Swo #language-Spanish #language-Spanish #language-Saposa #language-Waata #language-Arammba #language-Swahili (macrolanguage) #language-Swahili (macrolanguage) #language-Swahili (individual language) #language-Suba #language-Kagate #language-Eastern Tamang #language-Tamil #language-Tamil #language-Tiang #language-Panchpargania #language-Western Tamang #language-Tetun Dili #language-Teso #language-Tetum #language-Tajik #language-Tajik 
#language-Thai #language-Thai #language-Chitwania Tharu #language-Tharaka #language-Dangaura Tharu #language-Tha #language-Teop #language-Tukudede #language-Lenakel #language-North Tanna #language-Whitesands #language-Tontemboan #language-Toma #language-Tombulu #language-Tok Pisin #language-Tlacoapa Me'phaa #language-Tampuan #language-Tsamai #language-Tswana #language-Tswana #language-Tsonga #language-Tsonga #language-Turkana #language-Turka #language-Taveta #language-Muduga #language-Mundari #language-Urdu #language-Urdu #language-Uzbek #language-Uzbek #language-Venda #language-Venda #language-Vietnamese #language-Vietnamese #language-Vili #language-Waray (Philippines) #language-Wa #language-Wagdi #language-Wambon #language-Ndzwani Comorian #language-Wanukaka #language-Watakataui #language-Xhosa #language-Xhosa #language-Kagoro #language-Mbudum #language-Mengaka #language-Manado Malay #language-Soga #language-Yoloxochitl Mixtec #language-Nugunu (Cameroon) #language-Yangben #language-Yemba #language-Yakha #language-Yamphu #language-Eastern Yiddish #language-Ravula #language-Yetfa #language-Yiddish #language-Yiddish #language-Riang Lai #language-Yamap #language-Mitla Zapotec #language-Chinese #language-Malay (individual language) #language-Tokano #language-Zulu #language-Zulu #license-cc-by-nc-4.0 #region-us \n",
"### Data Instances\n\n\nThe examples look like this for Hausa:\n\n\nThis would produce an output:\n\n\nTo download all of the images locally directory 'images', you can do something similar to the following:",
"### Data Fields\n\n\nThe metadata fields below are available:\n\n\n* image\\_id: a unique ID for the image\n* image\\_url: a link for downloading the image\n* caption: a caption corresponding to the image\n* story\\_id: a unique ID for the corresponding story in which the caption appears\n* album\\_id: a unique ID for the corresponding album in which the image appears\n* original\\_bloom\\_language\\_tag: the original language identification from the Bloom library\n* index\\_in\\_story: an index corresponding to the order of the image-caption pair in the corresponding story",
"### Data Splits\n\n\nAll languages include a train, validation, and test split. However, for language having a small number of stories, certain of these splits maybe empty. In such cases, we recommend using any data for testing only or for zero-shot experiments.\n\n\nNOTE: The captions for the test split are currently hidden due to on ongoing shared task competition. They have been replaced by a placeholder '' token.\n\n\nChangelog\n---------\n\n\n* 25 October 2022 - Initial release\n* 25 October 2022 - Update to include licenses on each data item."
] |
d9a3cfd6830ce040b34c1169d564227de87d5bf8 |
# Dataset Card for "crd3"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CRD3 homepage](https://github.com/RevanthRameshkumar/CRD3)
- **Repository:** [CRD3 repository](https://github.com/RevanthRameshkumar/CRD3)
- **Paper:** [Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset](https://www.aclweb.org/anthology/2020.acl-main.459/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 279.93 MB
- **Size of the generated dataset:** 4020.33 MB
- **Total amount of disk used:** 4300.25 MB
### Dataset Summary
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.
Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.
The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding
abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player
collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,
and semantic ties to the previous dialogues.
### Supported Tasks and Leaderboards
`summarization`: The dataset can be used to train a model for abstractive summarization. A [fast abstractive summarization-RL](https://github.com/ChenRocks/fast_abs_rl) model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.
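One way to compute the metric the baseline reports is sketched below using the `rouge_score` package; the two strings are placeholders rather than CRD3 data, and the snippet is not the baseline implementation itself.

```python
# Sketch: ROUGE-L F1 between a reference summary and a generated one (placeholder strings).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "The cast wishes D&D Beyond a happy first anniversary before the game begins."
generated = "The players celebrate D&D Beyond's one-year anniversary."

scores = scorer.score(reference, generated)
print(scores["rougeL"].fmeasure)  # in [0, 1]; the baseline's 25.18 corresponds to 0.2518
```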
### Languages
The text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### default
- **Size of downloaded dataset files:** 279.93 MB
- **Size of the generated dataset:** 4020.33 MB
- **Total amount of disk used:** 4300.25 MB
An example of 'train' looks as follows.
```
{
"alignment_score": 3.679936647415161,
"chunk": "Wish them a Happy Birthday on their Facebook and Twitter pages! Also, as a reminder: D&D Beyond streams their weekly show (\"And Beyond\") every Wednesday on twitch.tv/dndbeyond.",
"chunk_id": 1,
"turn_end": 6,
"turn_num": 4,
"turn_start": 4,
"turns": {
"names": ["SAM"],
"utterances": ["Yesterday, guys, was D&D Beyond's first one--", "first one-year anniversary. Take two. Hey guys,", "yesterday was D&D Beyond's one-year anniversary.", "Wish them a happy birthday on their Facebook and", "Twitter pages."]
}
}
```
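A minimal loading sketch is shown below; it uses the repository id given at the end of this card (`shanya/crd3`) and mirrors the field names from the example above.

```python
# Minimal sketch: load the dataset and inspect one aligned chunk.
from datasets import load_dataset

ds = load_dataset("shanya/crd3", split="train")

example = ds[0]
print(example["chunk"])            # abstractive summary chunk
print(example["alignment_score"])  # how well the chunk aligns to the turns below
print(example["turns"])            # speaker names and their utterances
```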
### Data Fields
The data fields are the same among all splits.
#### default
- `chunk`: a `string` feature.
- `chunk_id`: a `int32` feature.
- `turn_start`: a `int32` feature.
- `turn_end`: a `int32` feature.
- `alignment_score`: a `float32` feature.
- `turn_num`: a `int32` feature.
- `turns`: a dictionary feature containing:
- `names`: a `string` feature.
- `utterances`: a `string` feature.
### Data Splits
| name | train |validation| test |
|-------|------:|---------:|------:|
|default|26,232| 3,470|4,541|
## Dataset Creation
### Curation Rationale
Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.
### Source Data
#### Initial Data Collection and Normalization
Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.
The abstractive summaries were collected from the [Critical Role Fandom wiki](https://criticalrole.fandom.com/)
#### Who are the source language producers?
The language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/), corresponding to the license of the Critical Role Wiki (https://criticalrole.fandom.com/).
### Citation Information
```
@inproceedings{rameshkumar-bailey-2020-storytelling,
    title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
    author = {Rameshkumar, Revanth and Bailey, Peter},
    year = {2020},
    publisher = {Association for Computational Linguistics},
    conference = {ACL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
| shanya/crd3 | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-05-02T14:01:40+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "crd3", "pretty_name": "CRD3 (Critical Role Dungeons and Dragons Dataset)"} | 2022-10-25T09:13:08+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us
| Dataset Card for "crd3"
=======================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: CRD3 homepage
* Repository: CRD3 repository
* Paper: Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset
* Point of Contact:
* Size of downloaded dataset files: 279.93 MB
* Size of the generated dataset: 4020.33 MB
* Total amount of disk used: 4300.25 MB
### Dataset Summary
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.
Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.
The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding
abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player
collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,
and semantic ties to the previous dialogues.
### Supported Tasks and Leaderboards
'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.
### Languages
The text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
Dataset Structure
-----------------
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### default
* Size of downloaded dataset files: 279.93 MB
* Size of the generated dataset: 4020.33 MB
* Total amount of disk used: 4300.25 MB
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### default
* 'chunk': a 'string' feature.
* 'chunk\_id': a 'int32' feature.
* 'turn\_start': a 'int32' feature.
* 'turn\_end': a 'int32' feature.
* 'alignment\_score': a 'float32' feature.
* 'turn\_num': a 'int32' feature.
* 'turns': a dictionary feature containing:
+ 'names': a 'string' feature.
+ 'utterances': a 'string' feature.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.
### Source Data
#### Initial Data Collection and Normalization
Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.
The abstractive summaries were collected from the Critical Role Fandom wiki
#### Who are the source language producers?
The language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa-4.0]., as corresponding to the Critical Role Wiki URL
### Contributions
Thanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset.
| [
"### Dataset Summary\n\n\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.",
"### Supported Tasks and Leaderboards\n\n\n'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.",
"### Languages\n\n\nThe text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 279.93 MB\n* Size of the generated dataset: 4020.33 MB\n* Total amount of disk used: 4300.25 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'chunk': a 'string' feature.\n* 'chunk\\_id': a 'int32' feature.\n* 'turn\\_start': a 'int32' feature.\n* 'turn\\_end': a 'int32' feature.\n* 'alignment\\_score': a 'float32' feature.\n* 'turn\\_num': a 'int32' feature.\n* 'turns': a dictionary feature containing:\n\t+ 'names': a 'string' feature.\n\t+ 'utterances': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nDungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.\n\n\nThe abstractive summaries were collected from the Critical Role Fandom wiki",
"#### Who are the source language producers?\n\n\nThe language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.",
"### Annotations",
"#### Annotation process\n\n\n[N/A]",
"#### Who are the annotators?\n\n\n[N/A]",
"### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nCRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.",
"### Licensing Information\n\n\nThis work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa-4.0]., as corresponding to the Critical Role Wiki URL",
"### Contributions\n\n\nThanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset."
] | [
"TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.",
"### Supported Tasks and Leaderboards\n\n\n'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.",
"### Languages\n\n\nThe text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 279.93 MB\n* Size of the generated dataset: 4020.33 MB\n* Total amount of disk used: 4300.25 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'chunk': a 'string' feature.\n* 'chunk\\_id': a 'int32' feature.\n* 'turn\\_start': a 'int32' feature.\n* 'turn\\_end': a 'int32' feature.\n* 'alignment\\_score': a 'float32' feature.\n* 'turn\\_num': a 'int32' feature.\n* 'turns': a dictionary feature containing:\n\t+ 'names': a 'string' feature.\n\t+ 'utterances': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nDungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.\n\n\nThe abstractive summaries were collected from the Critical Role Fandom wiki",
"#### Who are the source language producers?\n\n\nThe language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.",
"### Annotations",
"#### Annotation process\n\n\n[N/A]",
"#### Who are the annotators?\n\n\n[N/A]",
"### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nCRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.",
"### Licensing Information\n\n\nThis work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa-4.0]., as corresponding to the Critical Role Wiki URL",
"### Contributions\n\n\nThanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset."
] |
ff6d4fb32fd566d3a1fa20e946cba3234179465e |
# Dataset Card for WIT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)
- **Leaderboard:** [WIT leaderboard](https://paperswithcode.com/sota/text-image-retrieval-on-wit) and [WIT Kaggle competition](https://www.kaggle.com/competitions/wikipedia-image-caption/leaderboard)
- **Point of Contact:** [Miriam Redi](mailto:[email protected])
### Dataset Summary
Wikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.
>
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.
>
> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.
> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.
**Note**: Compared to [Google's version](https://huggingface.co/datasets/google/wit), which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- `text-retrieval`: The goal in this task is to build a model that retrieves the text (`caption_title_and_reference_description`) closest to an image. The leaderboard for this task can be found [here](https://paperswithcode.com/sota/text-image-retrieval-on-wit). This task also has a competition on [Kaggle](https://www.kaggle.com/c/wikipedia-image-caption).
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption.
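As a rough illustration, the sketch below concatenates those fields for one language entry of an example; the helper name and the particular combination are arbitrary choices, not something prescribed by the dataset.

```python
# Illustrative helper: build an input text from the caption fields of one wit_features entry.
def build_input_text(example, lang_index=0):
    feats = example["wit_features"]
    parts = [
        example["caption_attribution_description"],
        feats["caption_reference_description"][lang_index],
        feats["caption_alt_text_description"][lang_index],
    ]
    return " ".join(p for p in parts if p)  # drop fields that are None for this entry
```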
### Languages
The dataset contains examples from all Wikipedia languages.
## Dataset Structure
### Data Instances
Each instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x225 at 0x7F88F3876358>,
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/8/8b/Scolopendra_gigantea.jpg',
'embedding': [1.4784087, 2.8710432, 0.0, 0.51603067, ..., 10.266883, 0.51142216, 0.0, 2.3464653],
'metadata_url': 'http://commons.wikimedia.org/wiki/File:Scolopendra_gigantea.jpg',
'original_height': 3000,
'original_width': 4000,
'mime_type': 'image/jpeg',
'caption_attribution_description': 'English: Puerto Rican Giant Centipede, Scolopendra gigantea; Vieques, Puerto Rico Slovenčina: Stonožka obrovská, Scolopendra gigantea; Vieques, Portoriko',
'wit_features': {
'language': ['ro', 'vi', 'sk', ..., 'nl', 'th', 'lv'],
'page_url': ['https://ro.wikipedia.org/wiki/Scolopendra_gigantea', 'https://vi.wikipedia.org/wiki/Scolopendra_gigantea', 'https://sk.wikipedia.org/wiki/Scolopendra_gigantea', ..., 'https://nl.wikipedia.org/wiki/Scolopendra_gigantea', 'https://th.wikipedia.org/wiki/%E0%B8%95%E0%B8%B0%E0%B8%82%E0%B8%B2%E0%B8%9A%E0%B8%A2%E0%B8%B1%E0%B8%81%E0%B8%A9%E0%B9%8C%E0%B8%82%E0%B8%B2%E0%B9%80%E0%B8%AB%E0%B8%A5%E0%B8%B7%E0%B8%AD%E0%B8%87%E0%B9%80%E0%B8%9B%E0%B8%A3%E0%B8%B9', 'https://lv.wikipedia.org/wiki/Skolopendru_dzimta'],
'attribution_passes_lang_id': [True, True, True, ..., True, True, True],
'caption_alt_text_description': [None, None, None, ..., 'Scolopendra gigantea', None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_reference_description': [None, None, None, ..., None, None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_title_and_reference_description': [None, 'Scolopendra gigantea [SEP] ', None, ..., 'Scolopendra gigantea [SEP] ', None, 'Skolopendru dzimta [SEP] Milzu skolopendra (Scolopendra gigantea)'],
'context_page_description': ['Scolopendra gigantea este un miriapod din clasa Chilopoda, fiind cel mai mare reprezentant al genului Scolopendra. Adultul poate atinge o lungime de 26 cm, uneori depășind 30 cm. Această specie habitează în regiunile de nord și de vest a Americii de Sud, pe insulele Trinidad, insulele Virgine, Jamaica Hispaniola ș.a. Localnicii denumesc scolopendra chilopodul gigant galben și chilopodul gigant amazonian.', 'Scolopendra gigantea là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26 cm và có thể vượt quá 30 cm. Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', 'Scolopendra gigantea, starší slovenský nazov: štípavica veľká, je živočích z rodu Scolopendra, s veľkosťou do 30 cm.', ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', 'ตะขาบยักษ์ขาเหลืองเปรู หรือ ตะขาบยักษ์อเมซอน เป็นตะขาบชนิดที่มีขนาดใหญ่ที่สุดในสกุล Scolopendra โดยปกติเมื่อโตเต็มที่จะยาว 26 เซนติเมตร แต่บางครั้งก็สามารถโตได้ถึง 30 เซนติเมตร ตะขาบชนิดนี้อาศัยอยู่ทางแถบเหนือและตะวันตกของทวีปอเมริกาใต้ และตามเกาะแก่งของประเทศตรินิแดดและจาไมกา เป็นสัตว์กินเนื้อ โดยกินจิ้งจก, กบ, นก, หนู และแม้แต่ค้างคาวเป็นอาหาร และขึ้นชื่อในเรื่องความดุร้าย', 'Skolpendru dzimta pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'context_section_description': [None, 'Scolopendra gigantea (còn được gọi là Rết chân vàng khổng lồ Peru và Rết khổng lồ Amazon) là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26\xa0cm (10\xa0in) và có thể vượt quá 30\xa0cm (12\xa0in). Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', None, ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', None, 'Skolpendru dzimta (Scolopendridae) pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'hierarchical_section_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'is_main_image': [True, True, True, ..., True, True, True],
'page_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'section_title': [None, None, None, ..., None, None, None]
}
}
```
**Note**: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using [this script](wit_base/blob/main/scripts/wit.py). Additionally, 120 examples from the original files have incorrectly formatted one or more of the following fields: `original_height`, `original_width`, `mime_type` and `caption_attribution_description`. The fixed versions of these examples that were used in the generation script can be found [here](wit_base/blob/main/scripts/corrected_examples.py).
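A minimal loading sketch is given below; it streams the dataset so that the roughly 6.5M examples are not materialised locally, and only relies on fields documented on this page.

```python
# Minimal sketch: stream the dataset instead of downloading it in full.
from datasets import load_dataset

ds = load_dataset("wikimedia/wit_base", split="train", streaming=True)

for example in ds.take(1):
    print(example["image"].size)                # PIL image, resized to a 300-px width
    print(example["image_url"])
    print(len(example["embedding"]))            # 2048-d ResNet-50 signature
    print(example["wit_features"]["language"])  # languages with captions for this image
```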
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image resized to a width of 300-px while preserving its aspect ratio. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. A short sketch illustrating this access pattern is given after this field list.
- `image_url`: URL to wikipedia image
- `embedding`: Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a [ResNet-50](https://arxiv.org/abs/1512.03385) neural network trained with [Imagenet](https://www.image-net.org/) data. These embeddings contain rich information about the image content and layout, in a compact form
- `metadata_url`: URL to wikimedia page containing the image and the metadata
- `original_height`: Original image height before resizing
- `original_width`: Original image width before resizing
- `mime_type`: Mime type associated to the image
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.
- `wit_features`: Sequence of captions for the image with language, page URL, information about the page, caption text, etc.
- `language`: Language code depicting wikipedia language of the page
- `page_url`: URL to wikipedia page
  - `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description).
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `caption_reference_description`: This is the caption that is visible on the wikipedia page directly below the image.
- `caption_title_and_reference_description`: Concatenation of `page_title` and `caption_reference_description`.
- `context_page_description`: Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section
- `hierarchical_section_title`: Hierarchical section's title
- `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- `page_changed_recently`: [More Information Needed]
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
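The sketch below illustrates the access-order note from the `image` field above and a simple use of the precomputed embeddings; it assumes a non-streaming load, which downloads the full data.

```python
# Sketch: preferred access order for images, and comparing two precomputed embeddings.
import numpy as np
from datasets import load_dataset

ds = load_dataset("wikimedia/wit_base", split="train")  # non-streaming: full download

image = ds[0]["image"]  # decodes only this one image (preferred)
# ds["image"][0] would decode the entire image column first -- avoid it.

a = np.asarray(ds[0]["embedding"])
b = np.asarray(ds[1]["embedding"])
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(image.size, cosine)
```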
<p align='center'>
<img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt="Half Dome" /> <br />
<b>Figure: WIT annotation example. </b>
</p>
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
### Data Splits
All data is held in `train` split, with a total of 6,477,255 examples.
## Dataset Creation
### Curation Rationale
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.
> Getting easy access to the image files is crucial for participants to successfully develop competitive models.
> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other
pages that have discussions, comments and such). These number about ~124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a
study using (crowd-sourced) human annotators. As seen in Fig. 3,
we asked raters to answer 3 questions. Given an image and the page
title, raters first evaluate the quality of the attribution description
and reference description in the first two questions (order randomized). The third question understands the contextual quality of these
text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes
the image, "Maybe" if it is sufficiently explanatory and "No" if it is
irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/#FN1):
> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the [RetinaFace](https://arxiv.org/abs/1905.00641) detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are [candidate for deletion](https://commons.wikimedia.org/wiki/Commons:Deletion_requests) on Commons from the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Miriam Redi, Fabian Kaelin and Tiziano Piccardi.
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw), [yjernite](https://github.com/yjernite) and [mariosasko](https://github.com/mariosasko) for adding this dataset. | wikimedia/wit_base | [
"task_categories:image-to-text",
"task_categories:text-retrieval",
"task_ids:image-captioning",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|wikipedia",
"language:af",
"language:an",
"language:ar",
"language:arz",
"language:ast",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fil",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:hi",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:io",
"language:is",
"language:it",
"language:iw",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:kn",
"language:ko",
"language:la",
"language:lah",
"language:lb",
"language:lmo",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:my",
"language:nan",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:nv",
"language:oc",
"language:pa",
"language:pl",
"language:pt",
"language:qu",
"language:ro",
"language:ru",
"language:sco",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:tt",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vi",
"language:vo",
"language:war",
"language:xmf",
"language:yue",
"language:zh",
"license:cc-by-sa-4.0",
"text-image-retrieval",
"arxiv:2103.01913",
"arxiv:1512.03385",
"arxiv:1905.00641",
"region:us"
] | 2022-05-02T15:08:58+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["af", "an", "ar", "arz", "ast", "az", "azb", "ba", "bar", "be", "bg", "bn", "br", "bs", "ca", "ce", "ceb", "ckb", "cs", "cv", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gl", "hi", "hr", "hsb", "ht", "hu", "hy", "ia", "id", "io", "is", "it", "iw", "ja", "jv", "ka", "kk", "kn", "ko", "la", "lah", "lb", "lmo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "nan", "nds", "ne", "nl", "nn", "no", "nv", "oc", "pa", "pl", "pt", "qu", "ro", "ru", "sco", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "tg", "th", "tr", "tt", "uk", "ur", "uz", "vec", "vi", "vo", "war", "xmf", "yue", "zh"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original", "extended|wikipedia"], "task_categories": ["image-to-text", "text-retrieval"], "task_ids": ["image-captioning"], "paperswithcode_id": "wit", "pretty_name": "Wikipedia-based Image Text", "language_bcp47": ["af", "an", "ar", "arz", "ast", "az", "azb", "ba", "bar", "be", "be-tarask", "bg", "bn", "br", "bs", "ca", "ce", "ceb", "ckb", "cs", "cv", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gl", "hi", "hr", "hsb", "ht", "hu", "hy", "ia", "id", "io", "is", "it", "iw", "ja", "jv", "ka", "kk", "kn", "ko", "la", "lah", "lb", "lmo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "nan", "nds", "ne", "nl", "nn", "no", "nv", "oc", "pa", "pl", "pt", "qu", "ro", "ru", "sco", "si", "sk", "sl", "sq", "sr", "sr-Latn", "sv", "sw", "ta", "te", "tg", "th", "tr", "tt", "uk", "ur", "uz", "vec", "vi", "vo", "war", "xmf", "yue", "zh", "zh-TW"], "tags": ["text-image-retrieval"]} | 2022-11-04T15:09:33+00:00 | [
"2103.01913",
"1512.03385",
"1905.00641"
] | [
"af",
"an",
"ar",
"arz",
"ast",
"az",
"azb",
"ba",
"bar",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"ce",
"ceb",
"ckb",
"cs",
"cv",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gl",
"hi",
"hr",
"hsb",
"ht",
"hu",
"hy",
"ia",
"id",
"io",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"kn",
"ko",
"la",
"lah",
"lb",
"lmo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"nan",
"nds",
"ne",
"nl",
"nn",
"no",
"nv",
"oc",
"pa",
"pl",
"pt",
"qu",
"ro",
"ru",
"sco",
"si",
"sk",
"sl",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"tt",
"uk",
"ur",
"uz",
"vec",
"vi",
"vo",
"war",
"xmf",
"yue",
"zh"
] | TAGS
#task_categories-image-to-text #task_categories-text-retrieval #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|wikipedia #language-Afrikaans #language-Aragonese #language-Arabic #language-Egyptian Arabic #language-Asturian #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Bavarian #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Chechen #language-Cebuano #language-Central Kurdish #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-Filipino #language-French #language-Western Frisian #language-Irish #language-Galician #language-Hindi #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Ido #language-Icelandic #language-Italian #language-iw #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Kannada #language-Korean #language-Latin #language-Lahnda #language-Luxembourgish #language-Lombard #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Min Nan Chinese #language-Low German #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Navajo #language-Occitan (post 1500) #language-Panjabi #language-Polish #language-Portuguese #language-Quechua #language-Romanian #language-Russian #language-Scots #language-Sinhala #language-Slovak #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkish #language-Tatar #language-Ukrainian #language-Urdu #language-Uzbek #language-Venetian #language-Vietnamese #language-Volapük #language-Waray (Philippines) #language-Mingrelian #language-Yue Chinese #language-Chinese #license-cc-by-sa-4.0 #text-image-retrieval #arxiv-2103.01913 #arxiv-1512.03385 #arxiv-1905.00641 #region-us
|
# Dataset Card for WIT
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: WIT homepage
- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
- Leaderboard: WIT leaderboard and WIT Kaggle competition
- Point of Contact: Miriam Redi
### Dataset Summary
Wikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.
From the official blog post:
> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.
>
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.
>
> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.
> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.
Note: Compared to Google's version, which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.
### Supported Tasks and Leaderboards
- 'image-captioning': This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- 'text-retrieval': The goal in this task is to build a model that retrieves the text ('caption_title_and_reference_description') closest to an image. The leaderboard for this task can be found here. This task also has a competition on Kaggle.
In these tasks, any combination of the 'caption_reference_description', 'caption_attribution_description' and 'caption_alt_text_description' fields can be used as the input text/caption.
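As a rough, hedged illustration (not part of the original card), the helper below combines these caption fields into a single input string. It assumes `wit_features` is exposed as a dict of parallel lists, which is how the `datasets` library typically returns a sequence of structs; if your copy yields a list of dicts instead, adjust the indexing accordingly.

```python
# Sketch: build an input caption for the captioning / retrieval tasks by combining
# the caption fields of one example. The dict-of-parallel-lists layout of
# `wit_features` is an assumption; adjust if your copy returns a list of dicts.
def build_caption(example: dict) -> str:
    parts = []
    if example.get("caption_attribution_description"):
        parts.append(example["caption_attribution_description"])
    feats = example.get("wit_features") or {}
    for key in ("caption_reference_description", "caption_alt_text_description"):
        parts.extend(text for text in (feats.get(key) or []) if text)
    return " ".join(parts)
```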
### Languages
The dataset contains examples from all Wikipedia languages.
## Dataset Structure
### Data Instances
Each instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.
Note: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using this script. Additionally, 120 examples from the original files have incorrectly formatted one or more of the following fields: 'original_height', 'original_width', 'mime_type' and 'caption_attribution_description'. The fixed versions of these examples that were used in the generation script can be found here.
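As a hedged sketch (the repository id `wikimedia/wit_base` is an assumption about where this copy of the dataset is hosted), one way to stream a single instance and inspect these fields is:

```python
# Sketch: stream one instance and look at the image, embedding and captions.
# "wikimedia/wit_base" is an assumed repository id; replace it with the id of the
# copy you are actually using.
from datasets import load_dataset

wit = load_dataset("wikimedia/wit_base", split="train", streaming=True)
example = next(iter(wit))
print(example["image"].size)       # PIL image, resized to a 300-px width
print(len(example["embedding"]))   # 2048-dimensional ResNet-50 signature
print(example["wit_features"])     # captions attached to the image across languages
```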
### Data Fields
- 'image': A 'PIL.Image.Image' object containing the image resized to a width of 300-px while preserving its aspect ratio. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'.
- 'image_url': URL to wikipedia image
- 'embedding': Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a ResNet-50 neural network trained with Imagenet data. These embeddings contain rich information about the image content and layout, in a compact form
- 'metadata_url': URL to wikimedia page containing the image and the metadata
- 'original_height': Original image height before resizing
- 'original_width': Original image width before resizing
- 'mime_type': Mime type associated to the image
- 'caption_attribution_description': This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.
- 'wit_features': Sequence of captions for the image with language, page URL, information about the page, caption text, etc.
- 'language': Language code depicting wikipedia language of the page
- 'page_url': URL to wikipedia page
- 'attribution_passes_lang_id': Whether the 'language' field matches the attribution language (written in the prefix of the attribution description).
- 'caption_alt_text_description': This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- 'caption_reference_description': This is the caption that is visible on the wikipedia page directly below the image.
- 'caption_title_and_reference_description': Concatenation of 'page_title' and 'caption_reference_description'.
- 'context_page_description': Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- 'context_section_description': Text within the image's section
- 'hierarchical_section_title': Hierarchical section's title
- 'is_main_image': Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- 'page_changed_recently':
- 'page_title': Wikipedia page's title
- 'section_title': Section's title
<p align='center'>
 <img width='75%' src='URL' alt="Half Dome" /> </br>
<b>Figure: WIT annotation example. </b>
</p>
Details on the field content can be found directly in the paper, figure 5 and table 12.
### Data Splits
All data is held in 'train' split, with a total of 6477255 examples.
## Dataset Creation
### Curation Rationale
From the official blog post:
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.
> Getting easy access to the image files is crucial for participants to successfully develop competitive models.
> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.
### Source Data
#### Initial Data Collection and Normalization
From the paper, section 3.1:
> We started with all Wikipedia content pages (i.e., ignoring other
pages that have discussions, comments and such). These number about ~124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However it was human-validated.
From the paper, section 3.7:
> To further verify the quality of the WIT dataset we performed a
study using (crowd-sourced) human annotators. As seen in Fig. 3,
we asked raters to answer 3 questions. Given an image and the page
title, raters first evaluate the quality of the attribution description
and reference description in the first two questions (order randomized). The third question understands the contextual quality of these
text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes
the image, "Maybe" if it is sufficiently explanatory and "No" if it is
irrelevant or the image is inappropriate.
#### Who are the annotators?
### Personal and Sensitive Information
From the official blog post:
> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
From the paper, section 3.4:
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
## Additional Information
### Dataset Curators
Miriam Redi, Fabian Kaelin and Tiziano Piccardi.
### Licensing Information
CC BY-SA 4.0 international license
### Contributions
Thanks to @nateraw, yjernite and mariosasko for adding this dataset. | [
"# Dataset Card for WIT",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: WIT homepage\n- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning\n\n- Leaderboard: WIT leaderboard and WIT Kaggle competition\n- Point of Contact: Miriam Redi",
"### Dataset Summary\n\nWikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.\n\nFrom the official blog post:\n\n> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.\n>\n> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.\n>\n> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.\n\n> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.\n\nNote: Compared to Google's version, which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.",
"### Supported Tasks and Leaderboards\n\n- 'image-captioning': This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.\n\n- 'text-retrieval': The goal in this task is to build a model that retrieves the text ('caption_title_and_reference_description') closest to an image. The leaderboard for this task can be found here. This task also has a competition on Kaggle. \n\nIn these tasks, any combination of the 'caption_reference_description', 'caption_attribution_description' and 'caption_alt_text_description' fields can be used as the input text/caption.",
"### Languages\n\nThe dataset contains examples from all Wikipedia languages.",
"## Dataset Structure",
"### Data Instances\n\nEach instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.\n\n\n\nNote: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using this script. Additionally, 120 examples from the original files have incorrectly formatted one or more of the following fields: 'original_height', 'original_width', 'mime_type' and 'caption_attribution_description'. The fixed versions of these examples that were used in the generation script can be found here.",
"### Data Fields\n\n- 'image': A 'PIL.Image.Image' object containing the image resized to a width of 300-px while preserving its aspect ratio. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n- 'image_url': URL to wikipedia image\n- 'embedding': Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a ResNet-50 neural network trained with Imagenet data. These embeddings contain rich information about the image content and layout, in a compact form\n- 'metadata_url': URL to wikimedia page containing the image and the metadata\n- 'original_height': Original image height before resizing\n- 'original_width': Original image width before resizing\n- 'mime_type': Mime type associated to the image\n- 'caption_attribution_description': This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.\n- 'wit_features': Sequence of captions for the image with language, page URL, information about the page, caption text, etc.\n - 'language': Language code depicting wikipedia language of the page\n - 'page_url': URL to wikipedia page\n - 'attribution_passes_lang_id': Compared 'language' field with the attribution language (written in the prefix of the attribution description.\n - 'caption_alt_text_description': This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers\n - 'caption_reference_description': This is the caption that is visible on the wikipedia page directly below the image.\n - 'caption_title_and_reference_description': Concatenation of 'page_title' and 'caption_reference_description'.\n - 'context_page_description': Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.\n - 'context_section_description': Text within the image's section\n - 'hierarchical_section_title': Hierarchical section's title\n - 'is_main_image': Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.\n - 'page_changed_recently': \n - 'page_title': Wikipedia page's title\n - 'section_title': Section's title\n\n<p align='center'>\n <img width='75%' src='URL alt=\"Half Dome\" /> </br>\n<b>Figure: WIT annotation example. </b>\n</p>\n\nDetails on the field content can be found directly in the paper, figure 5 and table 12.",
"### Data Splits\n\nAll data is held in 'train' split, with a total of 6477255 examples.",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the official blog post: \n\n> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.\n\n> Getting easy access to the image files is crucial for participants to successfully develop competitive models.\n \n> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFrom the paper, section 3.1:\n\n> We started with all Wikipedia content pages (i.e., ignoring other\npages that have discussions, comments and such). These number about ~124M pages across 279 languages.",
"#### Who are the source language producers?\n\nText was extracted from Wikipedia.",
"### Annotations",
"#### Annotation process\n\nWIT was constructed using an automatic process. However it was human-validated.\n\nFrom the paper, section 3.7:\n\n> To further verify the quality of the WIT dataset we performed a\nstudy using (crowd-sourced) human annotators. As seen in Fig. 3,\nwe asked raters to answer 3 questions. Given an image and the page\ntitle, raters first evaluate the quality of the attribution description\nand reference description in the first two questions (order randomized). The third question understands the contextual quality of these\ntext descriptions given the page description and caption. Each response is on a 3-point scale: \"Yes\" if the text perfectly describes\nthe image, \"Maybe\" if it is sufficiently explanatory and \"No\" if it is\nirrelevant or the image is inappropriate.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nFrom the official blog post:\n\n> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nFrom the paper, section 3.4:\n\n> Lastly we found that certain image-text pairs occurred very\nfrequently. These were often generic images that did not have\nmuch to do with the main article page. Common examples\nincluded flags, logos, maps, insignia and such. To prevent\nbiasing the data, we heavily under-sampled all such images",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nMiriam Redi, Fabian Kaelin and Tiziano Piccardi.",
"### Licensing Information\n\nCC BY-SA 4.0 international license",
"### Contributions\n\nThanks to @nateraw, yjernite and mariosasko for adding this dataset."
] | [
"TAGS\n#task_categories-image-to-text #task_categories-text-retrieval #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|wikipedia #language-Afrikaans #language-Aragonese #language-Arabic #language-Egyptian Arabic #language-Asturian #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Bavarian #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Chechen #language-Cebuano #language-Central Kurdish #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-Filipino #language-French #language-Western Frisian #language-Irish #language-Galician #language-Hindi #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Ido #language-Icelandic #language-Italian #language-iw #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Kannada #language-Korean #language-Latin #language-Lahnda #language-Luxembourgish #language-Lombard #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Min Nan Chinese #language-Low German #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Navajo #language-Occitan (post 1500) #language-Panjabi #language-Polish #language-Portuguese #language-Quechua #language-Romanian #language-Russian #language-Scots #language-Sinhala #language-Slovak #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkish #language-Tatar #language-Ukrainian #language-Urdu #language-Uzbek #language-Venetian #language-Vietnamese #language-Volapük #language-Waray (Philippines) #language-Mingrelian #language-Yue Chinese #language-Chinese #license-cc-by-sa-4.0 #text-image-retrieval #arxiv-2103.01913 #arxiv-1512.03385 #arxiv-1905.00641 #region-us \n",
"# Dataset Card for WIT",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: WIT homepage\n- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning\n\n- Leaderboard: WIT leaderboard and WIT Kaggle competition\n- Point of Contact: Miriam Redi",
"### Dataset Summary\n\nWikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.\n\nFrom the official blog post:\n\n> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.\n>\n> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.\n>\n> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.\n\n> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.\n\nNote: Compared to Google's version, which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.",
"### Supported Tasks and Leaderboards\n\n- 'image-captioning': This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.\n\n- 'text-retrieval': The goal in this task is to build a model that retrieves the text ('caption_title_and_reference_description') closest to an image. The leaderboard for this task can be found here. This task also has a competition on Kaggle. \n\nIn these tasks, any combination of the 'caption_reference_description', 'caption_attribution_description' and 'caption_alt_text_description' fields can be used as the input text/caption.",
"### Languages\n\nThe dataset contains examples from all Wikipedia languages.",
"## Dataset Structure",
"### Data Instances\n\nEach instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.\n\n\n\nNote: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using this script. Additionally, 120 examples from the original files have incorrectly formatted one or more of the following fields: 'original_height', 'original_width', 'mime_type' and 'caption_attribution_description'. The fixed versions of these examples that were used in the generation script can be found here.",
"### Data Fields\n\n- 'image': A 'PIL.Image.Image' object containing the image resized to a width of 300-px while preserving its aspect ratio. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n- 'image_url': URL to wikipedia image\n- 'embedding': Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a ResNet-50 neural network trained with Imagenet data. These embeddings contain rich information about the image content and layout, in a compact form\n- 'metadata_url': URL to wikimedia page containing the image and the metadata\n- 'original_height': Original image height before resizing\n- 'original_width': Original image width before resizing\n- 'mime_type': Mime type associated to the image\n- 'caption_attribution_description': This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.\n- 'wit_features': Sequence of captions for the image with language, page URL, information about the page, caption text, etc.\n - 'language': Language code depicting wikipedia language of the page\n - 'page_url': URL to wikipedia page\n - 'attribution_passes_lang_id': Compared 'language' field with the attribution language (written in the prefix of the attribution description.\n - 'caption_alt_text_description': This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers\n - 'caption_reference_description': This is the caption that is visible on the wikipedia page directly below the image.\n - 'caption_title_and_reference_description': Concatenation of 'page_title' and 'caption_reference_description'.\n - 'context_page_description': Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.\n - 'context_section_description': Text within the image's section\n - 'hierarchical_section_title': Hierarchical section's title\n - 'is_main_image': Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.\n - 'page_changed_recently': \n - 'page_title': Wikipedia page's title\n - 'section_title': Section's title\n\n<p align='center'>\n <img width='75%' src='URL alt=\"Half Dome\" /> </br>\n<b>Figure: WIT annotation example. </b>\n</p>\n\nDetails on the field content can be found directly in the paper, figure 5 and table 12.",
"### Data Splits\n\nAll data is held in 'train' split, with a total of 6477255 examples.",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the official blog post: \n\n> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.\n\n> Getting easy access to the image files is crucial for participants to successfully develop competitive models.\n \n> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFrom the paper, section 3.1:\n\n> We started with all Wikipedia content pages (i.e., ignoring other\npages that have discussions, comments and such). These number about ~124M pages across 279 languages.",
"#### Who are the source language producers?\n\nText was extracted from Wikipedia.",
"### Annotations",
"#### Annotation process\n\nWIT was constructed using an automatic process. However it was human-validated.\n\nFrom the paper, section 3.7:\n\n> To further verify the quality of the WIT dataset we performed a\nstudy using (crowd-sourced) human annotators. As seen in Fig. 3,\nwe asked raters to answer 3 questions. Given an image and the page\ntitle, raters first evaluate the quality of the attribution description\nand reference description in the first two questions (order randomized). The third question understands the contextual quality of these\ntext descriptions given the page description and caption. Each response is on a 3-point scale: \"Yes\" if the text perfectly describes\nthe image, \"Maybe\" if it is sufficiently explanatory and \"No\" if it is\nirrelevant or the image is inappropriate.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nFrom the official blog post:\n\n> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nFrom the paper, section 3.4:\n\n> Lastly we found that certain image-text pairs occurred very\nfrequently. These were often generic images that did not have\nmuch to do with the main article page. Common examples\nincluded flags, logos, maps, insignia and such. To prevent\nbiasing the data, we heavily under-sampled all such images",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nMiriam Redi, Fabian Kaelin and Tiziano Piccardi.",
"### Licensing Information\n\nCC BY-SA 4.0 international license",
"### Contributions\n\nThanks to @nateraw, yjernite and mariosasko for adding this dataset."
] |
014711311cec8b5959350c373878a3311caeb764 |
# Dataset Card for ImageNet
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://image-net.org/index.php
- **Repository:**
- **Paper:** https://arxiv.org/abs/1409.0575
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagenet?tag_filter=171
- **Point of Contact:** mailto: [email protected]
### Dataset Summary
ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet; the majority of them are nouns (80,000+). ImageNet aims to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated.
💡 This dataset provides access to ImageNet (ILSVRC) 2012, which is the most commonly used **subset** of ImageNet. It spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. The [patch](https://drive.google.com/file/d/16RYnHpVOW0XKCsn3G3S9GTHUyoV2-4WX/view) that fixes some of the corrupted test set images has already been applied to this version. For the full ImageNet dataset presented in [[2]](https://ieeexplore.ieee.org/abstract/document/5206848), please check the download section of the [main website](https://image-net.org/download-images.php).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 1000 ImageNet classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-imagenet?tag_filter=171).
To evaluate the `imagenet-classification` accuracy on the test split, one must first create an account at https://image-net.org. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:
```
670 778 794 387 650
217 691 564 909 364
737 369 430 531 124
755 930 755 512 152
```
The export format is described in full in "readme.txt" within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the text file contains 100,000 lines, one for each image in the test split. Each line of integers corresponds to the rank-ordered, top 5 predictions for that test image. The integers are 1-indexed, corresponding to the line number in the corresponding labels file. See `imagenet2012_labels.txt`.
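As a hedged illustration of this format (not an official tool), the sketch below writes such a file from an array of model scores. The `to_submission_id` callable is an assumption: it must translate your model's class indices into the 1-indexed ids of the devkit labels file, which are not necessarily in the same order as this dataset's sorted-synset labels.

```python
# Sketch: write a top-5 classification submission file (one line per test image,
# five space-separated, rank-ordered, 1-indexed class ids).
# Assumptions: `test_scores` has shape (num_test_images, 1000) and follows the order
# of the test split; `to_submission_id` maps a model class index to the 1-indexed id
# used by the devkit labels file.
import numpy as np

def write_submission(test_scores: np.ndarray, to_submission_id, path: str) -> None:
    with open(path, "w") as f:
        for scores in test_scores:
            top5 = np.argsort(scores)[::-1][:5]  # best prediction first
            f.write(" ".join(str(to_submission_id(int(i))) for i in top5) + "\n")

# Illustration only, with random scores and an identity-plus-one mapping:
# write_submission(np.random.rand(100_000, 1000), lambda i: i + 1, "submission.txt")
```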
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
An example looks like below:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x276021C5EB8>,
'label': 23
}
```
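A minimal loading sketch is given below; the repository id `imagenet-1k` and the need to accept the dataset's terms and authenticate before downloading are assumptions about the hosting setup rather than part of this card.

```python
# Sketch: stream one training example and inspect it.
from datasets import load_dataset

train = load_dataset("imagenet-1k", split="train", streaming=True)
example = next(iter(train))
print(example["image"].size)  # PIL.Image.Image
print(example["label"])       # int in [0, 999]
```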
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an `int` classification label. -1 for `test` set as the labels are missing.
The labels are indexed based on a sorted list of synset ids such as `n07565083`, which we automatically map to original class names. The original dataset is divided into folders based on these synset ids. To get a mapping from original synset names, use the file [LOC_synset_mapping.txt](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=LOC_synset_mapping.txt) available on the Kaggle challenge page. You can also use the `dataset_instance.features["label"].int2str` function to get the class name for a particular label index. Note that labels for the test set are returned as -1, as they are missing.
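A short sketch of this label-to-name mapping, assuming a non-streaming split loaded with the same (assumed) `imagenet-1k` repository id as above:

```python
# Sketch: map integer labels to class names through the ClassLabel feature.
from datasets import load_dataset

ds = load_dataset("imagenet-1k", split="validation")
label_feature = ds.features["label"]                  # ClassLabel with 1000 names
print(label_feature.int2str(23))                      # class name stored at index 23
print(label_feature.str2int(label_feature.names[0]))  # round-trip check -> 0
```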
<details>
<summary>
Click here to see the full list of ImageNet class labels mapping:
</summary>
|id|Class|
|--|-----|
|0 | tench, Tinca tinca|
|1 | goldfish, Carassius auratus|
|2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias|
|3 | tiger shark, Galeocerdo cuvieri|
|4 | hammerhead, hammerhead shark|
|5 | electric ray, crampfish, numbfish, torpedo|
|6 | stingray|
|7 | cock|
|8 | hen|
|9 | ostrich, Struthio camelus|
|10 | brambling, Fringilla montifringilla|
|11 | goldfinch, Carduelis carduelis|
|12 | house finch, linnet, Carpodacus mexicanus|
|13 | junco, snowbird|
|14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea|
|15 | robin, American robin, Turdus migratorius|
|16 | bulbul|
|17 | jay|
|18 | magpie|
|19 | chickadee|
|20 | water ouzel, dipper|
|21 | kite|
|22 | bald eagle, American eagle, Haliaeetus leucocephalus|
|23 | vulture|
|24 | great grey owl, great gray owl, Strix nebulosa|
|25 | European fire salamander, Salamandra salamandra|
|26 | common newt, Triturus vulgaris|
|27 | eft|
|28 | spotted salamander, Ambystoma maculatum|
|29 | axolotl, mud puppy, Ambystoma mexicanum|
|30 | bullfrog, Rana catesbeiana|
|31 | tree frog, tree-frog|
|32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui|
|33 | loggerhead, loggerhead turtle, Caretta caretta|
|34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea|
|35 | mud turtle|
|36 | terrapin|
|37 | box turtle, box tortoise|
|38 | banded gecko|
|39 | common iguana, iguana, Iguana iguana|
|40 | American chameleon, anole, Anolis carolinensis|
|41 | whiptail, whiptail lizard|
|42 | agama|
|43 | frilled lizard, Chlamydosaurus kingi|
|44 | alligator lizard|
|45 | Gila monster, Heloderma suspectum|
|46 | green lizard, Lacerta viridis|
|47 | African chameleon, Chamaeleo chamaeleon|
|48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis|
|49 | African crocodile, Nile crocodile, Crocodylus niloticus|
|50 | American alligator, Alligator mississipiensis|
|51 | triceratops|
|52 | thunder snake, worm snake, Carphophis amoenus|
|53 | ringneck snake, ring-necked snake, ring snake|
|54 | hognose snake, puff adder, sand viper|
|55 | green snake, grass snake|
|56 | king snake, kingsnake|
|57 | garter snake, grass snake|
|58 | water snake|
|59 | vine snake|
|60 | night snake, Hypsiglena torquata|
|61 | boa constrictor, Constrictor constrictor|
|62 | rock python, rock snake, Python sebae|
|63 | Indian cobra, Naja naja|
|64 | green mamba|
|65 | sea snake|
|66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus|
|67 | diamondback, diamondback rattlesnake, Crotalus adamanteus|
|68 | sidewinder, horned rattlesnake, Crotalus cerastes|
|69 | trilobite|
|70 | harvestman, daddy longlegs, Phalangium opilio|
|71 | scorpion|
|72 | black and gold garden spider, Argiope aurantia|
|73 | barn spider, Araneus cavaticus|
|74 | garden spider, Aranea diademata|
|75 | black widow, Latrodectus mactans|
|76 | tarantula|
|77 | wolf spider, hunting spider|
|78 | tick|
|79 | centipede|
|80 | black grouse|
|81 | ptarmigan|
|82 | ruffed grouse, partridge, Bonasa umbellus|
|83 | prairie chicken, prairie grouse, prairie fowl|
|84 | peacock|
|85 | quail|
|86 | partridge|
|87 | African grey, African gray, Psittacus erithacus|
|88 | macaw|
|89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita|
|90 | lorikeet|
|91 | coucal|
|92 | bee eater|
|93 | hornbill|
|94 | hummingbird|
|95 | jacamar|
|96 | toucan|
|97 | drake|
|98 | red-breasted merganser, Mergus serrator|
|99 | goose|
|100 | black swan, Cygnus atratus|
|101 | tusker|
|102 | echidna, spiny anteater, anteater|
|103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus|
|104 | wallaby, brush kangaroo|
|105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus|
|106 | wombat|
|107 | jellyfish|
|108 | sea anemone, anemone|
|109 | brain coral|
|110 | flatworm, platyhelminth|
|111 | nematode, nematode worm, roundworm|
|112 | conch|
|113 | snail|
|114 | slug|
|115 | sea slug, nudibranch|
|116 | chiton, coat-of-mail shell, sea cradle, polyplacophore|
|117 | chambered nautilus, pearly nautilus, nautilus|
|118 | Dungeness crab, Cancer magister|
|119 | rock crab, Cancer irroratus|
|120 | fiddler crab|
|121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica|
|122 | American lobster, Northern lobster, Maine lobster, Homarus americanus|
|123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish|
|124 | crayfish, crawfish, crawdad, crawdaddy|
|125 | hermit crab|
|126 | isopod|
|127 | white stork, Ciconia ciconia|
|128 | black stork, Ciconia nigra|
|129 | spoonbill|
|130 | flamingo|
|131 | little blue heron, Egretta caerulea|
|132 | American egret, great white heron, Egretta albus|
|133 | bittern|
|134 | crane|
|135 | limpkin, Aramus pictus|
|136 | European gallinule, Porphyrio porphyrio|
|137 | American coot, marsh hen, mud hen, water hen, Fulica americana|
|138 | bustard|
|139 | ruddy turnstone, Arenaria interpres|
|140 | red-backed sandpiper, dunlin, Erolia alpina|
|141 | redshank, Tringa totanus|
|142 | dowitcher|
|143 | oystercatcher, oyster catcher|
|144 | pelican|
|145 | king penguin, Aptenodytes patagonica|
|146 | albatross, mollymawk|
|147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus|
|148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca|
|149 | dugong, Dugong dugon|
|150 | sea lion|
|151 | Chihuahua|
|152 | Japanese spaniel|
|153 | Maltese dog, Maltese terrier, Maltese|
|154 | Pekinese, Pekingese, Peke|
|155 | Shih-Tzu|
|156 | Blenheim spaniel|
|157 | papillon|
|158 | toy terrier|
|159 | Rhodesian ridgeback|
|160 | Afghan hound, Afghan|
|161 | basset, basset hound|
|162 | beagle|
|163 | bloodhound, sleuthhound|
|164 | bluetick|
|165 | black-and-tan coonhound|
|166 | Walker hound, Walker foxhound|
|167 | English foxhound|
|168 | redbone|
|169 | borzoi, Russian wolfhound|
|170 | Irish wolfhound|
|171 | Italian greyhound|
|172 | whippet|
|173 | Ibizan hound, Ibizan Podenco|
|174 | Norwegian elkhound, elkhound|
|175 | otterhound, otter hound|
|176 | Saluki, gazelle hound|
|177 | Scottish deerhound, deerhound|
|178 | Weimaraner|
|179 | Staffordshire bullterrier, Staffordshire bull terrier|
|180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier|
|181 | Bedlington terrier|
|182 | Border terrier|
|183 | Kerry blue terrier|
|184 | Irish terrier|
|185 | Norfolk terrier|
|186 | Norwich terrier|
|187 | Yorkshire terrier|
|188 | wire-haired fox terrier|
|189 | Lakeland terrier|
|190 | Sealyham terrier, Sealyham|
|191 | Airedale, Airedale terrier|
|192 | cairn, cairn terrier|
|193 | Australian terrier|
|194 | Dandie Dinmont, Dandie Dinmont terrier|
|195 | Boston bull, Boston terrier|
|196 | miniature schnauzer|
|197 | giant schnauzer|
|198 | standard schnauzer|
|199 | Scotch terrier, Scottish terrier, Scottie|
|200 | Tibetan terrier, chrysanthemum dog|
|201 | silky terrier, Sydney silky|
|202 | soft-coated wheaten terrier|
|203 | West Highland white terrier|
|204 | Lhasa, Lhasa apso|
|205 | flat-coated retriever|
|206 | curly-coated retriever|
|207 | golden retriever|
|208 | Labrador retriever|
|209 | Chesapeake Bay retriever|
|210 | German short-haired pointer|
|211 | vizsla, Hungarian pointer|
|212 | English setter|
|213 | Irish setter, red setter|
|214 | Gordon setter|
|215 | Brittany spaniel|
|216 | clumber, clumber spaniel|
|217 | English springer, English springer spaniel|
|218 | Welsh springer spaniel|
|219 | cocker spaniel, English cocker spaniel, cocker|
|220 | Sussex spaniel|
|221 | Irish water spaniel|
|222 | kuvasz|
|223 | schipperke|
|224 | groenendael|
|225 | malinois|
|226 | briard|
|227 | kelpie|
|228 | komondor|
|229 | Old English sheepdog, bobtail|
|230 | Shetland sheepdog, Shetland sheep dog, Shetland|
|231 | collie|
|232 | Border collie|
|233 | Bouvier des Flandres, Bouviers des Flandres|
|234 | Rottweiler|
|235 | German shepherd, German shepherd dog, German police dog, alsatian|
|236 | Doberman, Doberman pinscher|
|237 | miniature pinscher|
|238 | Greater Swiss Mountain dog|
|239 | Bernese mountain dog|
|240 | Appenzeller|
|241 | EntleBucher|
|242 | boxer|
|243 | bull mastiff|
|244 | Tibetan mastiff|
|245 | French bulldog|
|246 | Great Dane|
|247 | Saint Bernard, St Bernard|
|248 | Eskimo dog, husky|
|249 | malamute, malemute, Alaskan malamute|
|250 | Siberian husky|
|251 | dalmatian, coach dog, carriage dog|
|252 | affenpinscher, monkey pinscher, monkey dog|
|253 | basenji|
|254 | pug, pug-dog|
|255 | Leonberg|
|256 | Newfoundland, Newfoundland dog|
|257 | Great Pyrenees|
|258 | Samoyed, Samoyede|
|259 | Pomeranian|
|260 | chow, chow chow|
|261 | keeshond|
|262 | Brabancon griffon|
|263 | Pembroke, Pembroke Welsh corgi|
|264 | Cardigan, Cardigan Welsh corgi|
|265 | toy poodle|
|266 | miniature poodle|
|267 | standard poodle|
|268 | Mexican hairless|
|269 | timber wolf, grey wolf, gray wolf, Canis lupus|
|270 | white wolf, Arctic wolf, Canis lupus tundrarum|
|271 | red wolf, maned wolf, Canis rufus, Canis niger|
|272 | coyote, prairie wolf, brush wolf, Canis latrans|
|273 | dingo, warrigal, warragal, Canis dingo|
|274 | dhole, Cuon alpinus|
|275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus|
|276 | hyena, hyaena|
|277 | red fox, Vulpes vulpes|
|278 | kit fox, Vulpes macrotis|
|279 | Arctic fox, white fox, Alopex lagopus|
|280 | grey fox, gray fox, Urocyon cinereoargenteus|
|281 | tabby, tabby cat|
|282 | tiger cat|
|283 | Persian cat|
|284 | Siamese cat, Siamese|
|285 | Egyptian cat|
|286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor|
|287 | lynx, catamount|
|288 | leopard, Panthera pardus|
|289 | snow leopard, ounce, Panthera uncia|
|290 | jaguar, panther, Panthera onca, Felis onca|
|291 | lion, king of beasts, Panthera leo|
|292 | tiger, Panthera tigris|
|293 | cheetah, chetah, Acinonyx jubatus|
|294 | brown bear, bruin, Ursus arctos|
|295 | American black bear, black bear, Ursus americanus, Euarctos americanus|
|296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus|
|297 | sloth bear, Melursus ursinus, Ursus ursinus|
|298 | mongoose|
|299 | meerkat, mierkat|
|300 | tiger beetle|
|301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle|
|302 | ground beetle, carabid beetle|
|303 | long-horned beetle, longicorn, longicorn beetle|
|304 | leaf beetle, chrysomelid|
|305 | dung beetle|
|306 | rhinoceros beetle|
|307 | weevil|
|308 | fly|
|309 | bee|
|310 | ant, emmet, pismire|
|311 | grasshopper, hopper|
|312 | cricket|
|313 | walking stick, walkingstick, stick insect|
|314 | cockroach, roach|
|315 | mantis, mantid|
|316 | cicada, cicala|
|317 | leafhopper|
|318 | lacewing, lacewing fly|
|319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk|
|320 | damselfly|
|321 | admiral|
|322 | ringlet, ringlet butterfly|
|323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus|
|324 | cabbage butterfly|
|325 | sulphur butterfly, sulfur butterfly|
|326 | lycaenid, lycaenid butterfly|
|327 | starfish, sea star|
|328 | sea urchin|
|329 | sea cucumber, holothurian|
|330 | wood rabbit, cottontail, cottontail rabbit|
|331 | hare|
|332 | Angora, Angora rabbit|
|333 | hamster|
|334 | porcupine, hedgehog|
|335 | fox squirrel, eastern fox squirrel, Sciurus niger|
|336 | marmot|
|337 | beaver|
|338 | guinea pig, Cavia cobaya|
|339 | sorrel|
|340 | zebra|
|341 | hog, pig, grunter, squealer, Sus scrofa|
|342 | wild boar, boar, Sus scrofa|
|343 | warthog|
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius|
|345 | ox|
|346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis|
|347 | bison|
|348 | ram, tup|
|349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis|
|350 | ibex, Capra ibex|
|351 | hartebeest|
|352 | impala, Aepyceros melampus|
|353 | gazelle|
|354 | Arabian camel, dromedary, Camelus dromedarius|
|355 | llama|
|356 | weasel|
|357 | mink|
|358 | polecat, fitch, foulmart, foumart, Mustela putorius|
|359 | black-footed ferret, ferret, Mustela nigripes|
|360 | otter|
|361 | skunk, polecat, wood pussy|
|362 | badger|
|363 | armadillo|
|364 | three-toed sloth, ai, Bradypus tridactylus|
|365 | orangutan, orang, orangutang, Pongo pygmaeus|
|366 | gorilla, Gorilla gorilla|
|367 | chimpanzee, chimp, Pan troglodytes|
|368 | gibbon, Hylobates lar|
|369 | siamang, Hylobates syndactylus, Symphalangus syndactylus|
|370 | guenon, guenon monkey|
|371 | patas, hussar monkey, Erythrocebus patas|
|372 | baboon|
|373 | macaque|
|374 | langur|
|375 | colobus, colobus monkey|
|376 | proboscis monkey, Nasalis larvatus|
|377 | marmoset|
|378 | capuchin, ringtail, Cebus capucinus|
|379 | howler monkey, howler|
|380 | titi, titi monkey|
|381 | spider monkey, Ateles geoffroyi|
|382 | squirrel monkey, Saimiri sciureus|
|383 | Madagascar cat, ring-tailed lemur, Lemur catta|
|384 | indri, indris, Indri indri, Indri brevicaudatus|
|385 | Indian elephant, Elephas maximus|
|386 | African elephant, Loxodonta africana|
|387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens|
|388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca|
|389 | barracouta, snoek|
|390 | eel|
|391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch|
|392 | rock beauty, Holocanthus tricolor|
|393 | anemone fish|
|394 | sturgeon|
|395 | gar, garfish, garpike, billfish, Lepisosteus osseus|
|396 | lionfish|
|397 | puffer, pufferfish, blowfish, globefish|
|398 | abacus|
|399 | abaya|
|400 | academic gown, academic robe, judge's robe|
|401 | accordion, piano accordion, squeeze box|
|402 | acoustic guitar|
|403 | aircraft carrier, carrier, flattop, attack aircraft carrier|
|404 | airliner|
|405 | airship, dirigible|
|406 | altar|
|407 | ambulance|
|408 | amphibian, amphibious vehicle|
|409 | analog clock|
|410 | apiary, bee house|
|411 | apron|
|412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin|
|413 | assault rifle, assault gun|
|414 | backpack, back pack, knapsack, packsack, rucksack, haversack|
|415 | bakery, bakeshop, bakehouse|
|416 | balance beam, beam|
|417 | balloon|
|418 | ballpoint, ballpoint pen, ballpen, Biro|
|419 | Band Aid|
|420 | banjo|
|421 | bannister, banister, balustrade, balusters, handrail|
|422 | barbell|
|423 | barber chair|
|424 | barbershop|
|425 | barn|
|426 | barometer|
|427 | barrel, cask|
|428 | barrow, garden cart, lawn cart, wheelbarrow|
|429 | baseball|
|430 | basketball|
|431 | bassinet|
|432 | bassoon|
|433 | bathing cap, swimming cap|
|434 | bath towel|
|435 | bathtub, bathing tub, bath, tub|
|436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon|
|437 | beacon, lighthouse, beacon light, pharos|
|438 | beaker|
|439 | bearskin, busby, shako|
|440 | beer bottle|
|441 | beer glass|
|442 | bell cote, bell cot|
|443 | bib|
|444 | bicycle-built-for-two, tandem bicycle, tandem|
|445 | bikini, two-piece|
|446 | binder, ring-binder|
|447 | binoculars, field glasses, opera glasses|
|448 | birdhouse|
|449 | boathouse|
|450 | bobsled, bobsleigh, bob|
|451 | bolo tie, bolo, bola tie, bola|
|452 | bonnet, poke bonnet|
|453 | bookcase|
|454 | bookshop, bookstore, bookstall|
|455 | bottlecap|
|456 | bow|
|457 | bow tie, bow-tie, bowtie|
|458 | brass, memorial tablet, plaque|
|459 | brassiere, bra, bandeau|
|460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty|
|461 | breastplate, aegis, egis|
|462 | broom|
|463 | bucket, pail|
|464 | buckle|
|465 | bulletproof vest|
|466 | bullet train, bullet|
|467 | butcher shop, meat market|
|468 | cab, hack, taxi, taxicab|
|469 | caldron, cauldron|
|470 | candle, taper, wax light|
|471 | cannon|
|472 | canoe|
|473 | can opener, tin opener|
|474 | cardigan|
|475 | car mirror|
|476 | carousel, carrousel, merry-go-round, roundabout, whirligig|
|477 | carpenter's kit, tool kit|
|478 | carton|
|479 | car wheel|
|480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM|
|481 | cassette|
|482 | cassette player|
|483 | castle|
|484 | catamaran|
|485 | CD player|
|486 | cello, violoncello|
|487 | cellular telephone, cellular phone, cellphone, cell, mobile phone|
|488 | chain|
|489 | chainlink fence|
|490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour|
|491 | chain saw, chainsaw|
|492 | chest|
|493 | chiffonier, commode|
|494 | chime, bell, gong|
|495 | china cabinet, china closet|
|496 | Christmas stocking|
|497 | church, church building|
|498 | cinema, movie theater, movie theatre, movie house, picture palace|
|499 | cleaver, meat cleaver, chopper|
|500 | cliff dwelling|
|501 | cloak|
|502 | clog, geta, patten, sabot|
|503 | cocktail shaker|
|504 | coffee mug|
|505 | coffeepot|
|506 | coil, spiral, volute, whorl, helix|
|507 | combination lock|
|508 | computer keyboard, keypad|
|509 | confectionery, confectionary, candy store|
|510 | container ship, containership, container vessel|
|511 | convertible|
|512 | corkscrew, bottle screw|
|513 | cornet, horn, trumpet, trump|
|514 | cowboy boot|
|515 | cowboy hat, ten-gallon hat|
|516 | cradle|
|517 | crane_1|
|518 | crash helmet|
|519 | crate|
|520 | crib, cot|
|521 | Crock Pot|
|522 | croquet ball|
|523 | crutch|
|524 | cuirass|
|525 | dam, dike, dyke|
|526 | desk|
|527 | desktop computer|
|528 | dial telephone, dial phone|
|529 | diaper, nappy, napkin|
|530 | digital clock|
|531 | digital watch|
|532 | dining table, board|
|533 | dishrag, dishcloth|
|534 | dishwasher, dish washer, dishwashing machine|
|535 | disk brake, disc brake|
|536 | dock, dockage, docking facility|
|537 | dogsled, dog sled, dog sleigh|
|538 | dome|
|539 | doormat, welcome mat|
|540 | drilling platform, offshore rig|
|541 | drum, membranophone, tympan|
|542 | drumstick|
|543 | dumbbell|
|544 | Dutch oven|
|545 | electric fan, blower|
|546 | electric guitar|
|547 | electric locomotive|
|548 | entertainment center|
|549 | envelope|
|550 | espresso maker|
|551 | face powder|
|552 | feather boa, boa|
|553 | file, file cabinet, filing cabinet|
|554 | fireboat|
|555 | fire engine, fire truck|
|556 | fire screen, fireguard|
|557 | flagpole, flagstaff|
|558 | flute, transverse flute|
|559 | folding chair|
|560 | football helmet|
|561 | forklift|
|562 | fountain|
|563 | fountain pen|
|564 | four-poster|
|565 | freight car|
|566 | French horn, horn|
|567 | frying pan, frypan, skillet|
|568 | fur coat|
|569 | garbage truck, dustcart|
|570 | gasmask, respirator, gas helmet|
|571 | gas pump, gasoline pump, petrol pump, island dispenser|
|572 | goblet|
|573 | go-kart|
|574 | golf ball|
|575 | golfcart, golf cart|
|576 | gondola|
|577 | gong, tam-tam|
|578 | gown|
|579 | grand piano, grand|
|580 | greenhouse, nursery, glasshouse|
|581 | grille, radiator grille|
|582 | grocery store, grocery, food market, market|
|583 | guillotine|
|584 | hair slide|
|585 | hair spray|
|586 | half track|
|587 | hammer|
|588 | hamper|
|589 | hand blower, blow dryer, blow drier, hair dryer, hair drier|
|590 | hand-held computer, hand-held microcomputer|
|591 | handkerchief, hankie, hanky, hankey|
|592 | hard disc, hard disk, fixed disk|
|593 | harmonica, mouth organ, harp, mouth harp|
|594 | harp|
|595 | harvester, reaper|
|596 | hatchet|
|597 | holster|
|598 | home theater, home theatre|
|599 | honeycomb|
|600 | hook, claw|
|601 | hoopskirt, crinoline|
|602 | horizontal bar, high bar|
|603 | horse cart, horse-cart|
|604 | hourglass|
|605 | iPod|
|606 | iron, smoothing iron|
|607 | jack-o'-lantern|
|608 | jean, blue jean, denim|
|609 | jeep, landrover|
|610 | jersey, T-shirt, tee shirt|
|611 | jigsaw puzzle|
|612 | jinrikisha, ricksha, rickshaw|
|613 | joystick|
|614 | kimono|
|615 | knee pad|
|616 | knot|
|617 | lab coat, laboratory coat|
|618 | ladle|
|619 | lampshade, lamp shade|
|620 | laptop, laptop computer|
|621 | lawn mower, mower|
|622 | lens cap, lens cover|
|623 | letter opener, paper knife, paperknife|
|624 | library|
|625 | lifeboat|
|626 | lighter, light, igniter, ignitor|
|627 | limousine, limo|
|628 | liner, ocean liner|
|629 | lipstick, lip rouge|
|630 | Loafer|
|631 | lotion|
|632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system|
|633 | loupe, jeweler's loupe|
|634 | lumbermill, sawmill|
|635 | magnetic compass|
|636 | mailbag, postbag|
|637 | mailbox, letter box|
|638 | maillot|
|639 | maillot, tank suit|
|640 | manhole cover|
|641 | maraca|
|642 | marimba, xylophone|
|643 | mask|
|644 | matchstick|
|645 | maypole|
|646 | maze, labyrinth|
|647 | measuring cup|
|648 | medicine chest, medicine cabinet|
|649 | megalith, megalithic structure|
|650 | microphone, mike|
|651 | microwave, microwave oven|
|652 | military uniform|
|653 | milk can|
|654 | minibus|
|655 | miniskirt, mini|
|656 | minivan|
|657 | missile|
|658 | mitten|
|659 | mixing bowl|
|660 | mobile home, manufactured home|
|661 | Model T|
|662 | modem|
|663 | monastery|
|664 | monitor|
|665 | moped|
|666 | mortar|
|667 | mortarboard|
|668 | mosque|
|669 | mosquito net|
|670 | motor scooter, scooter|
|671 | mountain bike, all-terrain bike, off-roader|
|672 | mountain tent|
|673 | mouse, computer mouse|
|674 | mousetrap|
|675 | moving van|
|676 | muzzle|
|677 | nail|
|678 | neck brace|
|679 | necklace|
|680 | nipple|
|681 | notebook, notebook computer|
|682 | obelisk|
|683 | oboe, hautboy, hautbois|
|684 | ocarina, sweet potato|
|685 | odometer, hodometer, mileometer, milometer|
|686 | oil filter|
|687 | organ, pipe organ|
|688 | oscilloscope, scope, cathode-ray oscilloscope, CRO|
|689 | overskirt|
|690 | oxcart|
|691 | oxygen mask|
|692 | packet|
|693 | paddle, boat paddle|
|694 | paddlewheel, paddle wheel|
|695 | padlock|
|696 | paintbrush|
|697 | pajama, pyjama, pj's, jammies|
|698 | palace|
|699 | panpipe, pandean pipe, syrinx|
|700 | paper towel|
|701 | parachute, chute|
|702 | parallel bars, bars|
|703 | park bench|
|704 | parking meter|
|705 | passenger car, coach, carriage|
|706 | patio, terrace|
|707 | pay-phone, pay-station|
|708 | pedestal, plinth, footstall|
|709 | pencil box, pencil case|
|710 | pencil sharpener|
|711 | perfume, essence|
|712 | Petri dish|
|713 | photocopier|
|714 | pick, plectrum, plectron|
|715 | pickelhaube|
|716 | picket fence, paling|
|717 | pickup, pickup truck|
|718 | pier|
|719 | piggy bank, penny bank|
|720 | pill bottle|
|721 | pillow|
|722 | ping-pong ball|
|723 | pinwheel|
|724 | pirate, pirate ship|
|725 | pitcher, ewer|
|726 | plane, carpenter's plane, woodworking plane|
|727 | planetarium|
|728 | plastic bag|
|729 | plate rack|
|730 | plow, plough|
|731 | plunger, plumber's helper|
|732 | Polaroid camera, Polaroid Land camera|
|733 | pole|
|734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria|
|735 | poncho|
|736 | pool table, billiard table, snooker table|
|737 | pop bottle, soda bottle|
|738 | pot, flowerpot|
|739 | potter's wheel|
|740 | power drill|
|741 | prayer rug, prayer mat|
|742 | printer|
|743 | prison, prison house|
|744 | projectile, missile|
|745 | projector|
|746 | puck, hockey puck|
|747 | punching bag, punch bag, punching ball, punchball|
|748 | purse|
|749 | quill, quill pen|
|750 | quilt, comforter, comfort, puff|
|751 | racer, race car, racing car|
|752 | racket, racquet|
|753 | radiator|
|754 | radio, wireless|
|755 | radio telescope, radio reflector|
|756 | rain barrel|
|757 | recreational vehicle, RV, R.V.|
|758 | reel|
|759 | reflex camera|
|760 | refrigerator, icebox|
|761 | remote control, remote|
|762 | restaurant, eating house, eating place, eatery|
|763 | revolver, six-gun, six-shooter|
|764 | rifle|
|765 | rocking chair, rocker|
|766 | rotisserie|
|767 | rubber eraser, rubber, pencil eraser|
|768 | rugby ball|
|769 | rule, ruler|
|770 | running shoe|
|771 | safe|
|772 | safety pin|
|773 | saltshaker, salt shaker|
|774 | sandal|
|775 | sarong|
|776 | sax, saxophone|
|777 | scabbard|
|778 | scale, weighing machine|
|779 | school bus|
|780 | schooner|
|781 | scoreboard|
|782 | screen, CRT screen|
|783 | screw|
|784 | screwdriver|
|785 | seat belt, seatbelt|
|786 | sewing machine|
|787 | shield, buckler|
|788 | shoe shop, shoe-shop, shoe store|
|789 | shoji|
|790 | shopping basket|
|791 | shopping cart|
|792 | shovel|
|793 | shower cap|
|794 | shower curtain|
|795 | ski|
|796 | ski mask|
|797 | sleeping bag|
|798 | slide rule, slipstick|
|799 | sliding door|
|800 | slot, one-armed bandit|
|801 | snorkel|
|802 | snowmobile|
|803 | snowplow, snowplough|
|804 | soap dispenser|
|805 | soccer ball|
|806 | sock|
|807 | solar dish, solar collector, solar furnace|
|808 | sombrero|
|809 | soup bowl|
|810 | space bar|
|811 | space heater|
|812 | space shuttle|
|813 | spatula|
|814 | speedboat|
|815 | spider web, spider's web|
|816 | spindle|
|817 | sports car, sport car|
|818 | spotlight, spot|
|819 | stage|
|820 | steam locomotive|
|821 | steel arch bridge|
|822 | steel drum|
|823 | stethoscope|
|824 | stole|
|825 | stone wall|
|826 | stopwatch, stop watch|
|827 | stove|
|828 | strainer|
|829 | streetcar, tram, tramcar, trolley, trolley car|
|830 | stretcher|
|831 | studio couch, day bed|
|832 | stupa, tope|
|833 | submarine, pigboat, sub, U-boat|
|834 | suit, suit of clothes|
|835 | sundial|
|836 | sunglass|
|837 | sunglasses, dark glasses, shades|
|838 | sunscreen, sunblock, sun blocker|
|839 | suspension bridge|
|840 | swab, swob, mop|
|841 | sweatshirt|
|842 | swimming trunks, bathing trunks|
|843 | swing|
|844 | switch, electric switch, electrical switch|
|845 | syringe|
|846 | table lamp|
|847 | tank, army tank, armored combat vehicle, armoured combat vehicle|
|848 | tape player|
|849 | teapot|
|850 | teddy, teddy bear|
|851 | television, television system|
|852 | tennis ball|
|853 | thatch, thatched roof|
|854 | theater curtain, theatre curtain|
|855 | thimble|
|856 | thresher, thrasher, threshing machine|
|857 | throne|
|858 | tile roof|
|859 | toaster|
|860 | tobacco shop, tobacconist shop, tobacconist|
|861 | toilet seat|
|862 | torch|
|863 | totem pole|
|864 | tow truck, tow car, wrecker|
|865 | toyshop|
|866 | tractor|
|867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi|
|868 | tray|
|869 | trench coat|
|870 | tricycle, trike, velocipede|
|871 | trimaran|
|872 | tripod|
|873 | triumphal arch|
|874 | trolleybus, trolley coach, trackless trolley|
|875 | trombone|
|876 | tub, vat|
|877 | turnstile|
|878 | typewriter keyboard|
|879 | umbrella|
|880 | unicycle, monocycle|
|881 | upright, upright piano|
|882 | vacuum, vacuum cleaner|
|883 | vase|
|884 | vault|
|885 | velvet|
|886 | vending machine|
|887 | vestment|
|888 | viaduct|
|889 | violin, fiddle|
|890 | volleyball|
|891 | waffle iron|
|892 | wall clock|
|893 | wallet, billfold, notecase, pocketbook|
|894 | wardrobe, closet, press|
|895 | warplane, military plane|
|896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin|
|897 | washer, automatic washer, washing machine|
|898 | water bottle|
|899 | water jug|
|900 | water tower|
|901 | whiskey jug|
|902 | whistle|
|903 | wig|
|904 | window screen|
|905 | window shade|
|906 | Windsor tie|
|907 | wine bottle|
|908 | wing|
|909 | wok|
|910 | wooden spoon|
|911 | wool, woolen, woollen|
|912 | worm fence, snake fence, snake-rail fence, Virginia fence|
|913 | wreck|
|914 | yawl|
|915 | yurt|
|916 | web site, website, internet site, site|
|917 | comic book|
|918 | crossword puzzle, crossword|
|919 | street sign|
|920 | traffic light, traffic signal, stoplight|
|921 | book jacket, dust cover, dust jacket, dust wrapper|
|922 | menu|
|923 | plate|
|924 | guacamole|
|925 | consomme|
|926 | hot pot, hotpot|
|927 | trifle|
|928 | ice cream, icecream|
|929 | ice lolly, lolly, lollipop, popsicle|
|930 | French loaf|
|931 | bagel, beigel|
|932 | pretzel|
|933 | cheeseburger|
|934 | hotdog, hot dog, red hot|
|935 | mashed potato|
|936 | head cabbage|
|937 | broccoli|
|938 | cauliflower|
|939 | zucchini, courgette|
|940 | spaghetti squash|
|941 | acorn squash|
|942 | butternut squash|
|943 | cucumber, cuke|
|944 | artichoke, globe artichoke|
|945 | bell pepper|
|946 | cardoon|
|947 | mushroom|
|948 | Granny Smith|
|949 | strawberry|
|950 | orange|
|951 | lemon|
|952 | fig|
|953 | pineapple, ananas|
|954 | banana|
|955 | jackfruit, jak, jack|
|956 | custard apple|
|957 | pomegranate|
|958 | hay|
|959 | carbonara|
|960 | chocolate sauce, chocolate syrup|
|961 | dough|
|962 | meat loaf, meatloaf|
|963 | pizza, pizza pie|
|964 | potpie|
|965 | burrito|
|966 | red wine|
|967 | espresso|
|968 | cup|
|969 | eggnog|
|970 | alp|
|971 | bubble|
|972 | cliff, drop, drop-off|
|973 | coral reef|
|974 | geyser|
|975 | lakeside, lakeshore|
|976 | promontory, headland, head, foreland|
|977 | sandbar, sand bar|
|978 | seashore, coast, seacoast, sea-coast|
|979 | valley, vale|
|980 | volcano|
|981 | ballplayer, baseball player|
|982 | groom, bridegroom|
|983 | scuba diver|
|984 | rapeseed|
|985 | daisy|
|986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum|
|987 | corn|
|988 | acorn|
|989 | hip, rose hip, rosehip|
|990 | buckeye, horse chestnut, conker|
|991 | coral fungus|
|992 | agaric|
|993 | gyromitra|
|994 | stinkhorn, carrion fungus|
|995 | earthstar|
|996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa|
|997 | bolete|
|998 | ear, spike, capitulum|
|999 | toilet tissue, toilet paper, bathroom tissue|
</details>
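When the dataset is loaded with the Hugging Face `datasets` library, the integer-to-name mapping shown above is exposed through the `label` feature. The snippet below is a minimal sketch of converting between label ids and class names; it assumes access to the gated `imagenet-1k` repository has been granted and that you are authenticated (e.g. via `huggingface-cli login`).

```python
from datasets import load_dataset

# Assumes gated access to "imagenet-1k" has been granted and you are authenticated.
ds = load_dataset("imagenet-1k", split="validation")

label_feature = ds.features["label"]          # ClassLabel carrying the 1000 names above
print(label_feature.int2str(281))             # e.g. "tabby, tabby cat"
print(label_feature.str2int("goldfish, Carassius auratus"))  # e.g. 1
```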
### Data Splits
| |train |validation| test |
|-------------|------:|---------:|------:|
|# of examples|1281167|50000 |100000 |
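As a rough sketch, each split can be requested individually with the `datasets` library; this assumes gated access has been granted, and `streaming=True` is used here only to avoid downloading the full training archive (roughly 150 GB) up front.

```python
from datasets import load_dataset

# Assumes gated access to "imagenet-1k" and prior authentication.
train = load_dataset("imagenet-1k", split="train", streaming=True)  # IterableDataset
validation = load_dataset("imagenet-1k", split="validation")
test = load_dataset("imagenet-1k", split="test")

print(validation.num_rows)  # 50000
```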
## Dataset Creation
### Curation Rationale
The ImageNet project was inspired by two important needs in computer vision research. The first was the need to establish a clear North Star problem in computer vision. While the field enjoyed an abundance of important tasks to work on, from stereo vision to image retrieval, from 3D reconstruction to image segmentation, object categorization was recognized to be one of the most fundamental capabilities of both human and machine vision. Hence there was a growing demand for a high quality object categorization benchmark with clearly established evaluation metrics. Second, there was a critical need for more data to enable more generalizable machine learning methods. Ever since the birth of the digital era and the availability of web-scale data exchanges, researchers in these fields have been working hard to design more and more sophisticated algorithms to index, retrieve, organize and annotate multimedia data. But good research requires good resources. To tackle this problem at scale (think of your growing personal collection of digital images, or videos, or a commercial web search engine’s database), it was critical to provide researchers with a large-scale image database for both training and testing. The convergence of these two intellectual reasons motivated us to build ImageNet.
### Source Data
#### Initial Data Collection and Normalization
Initial data for the ImageNet image classification task consists of photographs collected from [Flickr](https://www.flickr.com) and other search engines, manually labeled with the presence of one of 1000 object categories. Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs [1](https://ieeexplore.ieee.org/abstract/document/5206848). The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark, while the rest of ImageNet continued to grow.
#### Who are the source language producers?
The class labels are WordNet synsets, further quality-controlled by human annotators. The images are from Flickr and other search engines.
### Annotations
#### Annotation process
The annotation process for collecting ImageNet for the image classification task consists of three steps:
1. Defining the 1000 object categories for the image classification task. These categories have evolved over the years.
1. Collecting candidate images for these object categories using a search engine.
1. Quality control on the candidate images, using human annotators on Amazon Mechanical Turk (AMT) to make sure each image actually depicts the synset it was collected for.
See section 3.1 in [1](https://arxiv.org/abs/1409.0575) for more details on the data collection procedure and [2](https://ieeexplore.ieee.org/abstract/document/5206848) for general information on ImageNet.
#### Who are the annotators?
Images are automatically fetched from an image search engine based on the synsets and filtered by human annotators on Amazon Mechanical Turk. See [1](https://arxiv.org/abs/1409.0575) for more details.
### Personal and Sensitive Information
The 1,000 categories selected for this subset contain only 3 people categories (scuba diver, bridegroom, and baseball player), while the full ImageNet contains 2,832 people categories under the person subtree (accounting for roughly 8.3% of the total images). This subset does contain images of people, collected without their consent. However, the study in [[1]](https://image-net.org/face-obfuscation/) on obfuscating the faces of people in the ImageNet 2012 subset shows that blurring people's faces causes only a minor decrease in accuracy (~0.6%), suggesting that privacy-aware models can be trained on ImageNet. For the larger ImageNet, there has been [an attempt](https://arxiv.org/abs/1912.07726) at filtering and balancing the people subtree.
## Considerations for Using the Data
### Social Impact of Dataset
The ImageNet dataset has been crucial to the advancement of deep learning, serving as the standard benchmark for computer vision models. The dataset probes models on their understanding of objects and has become the de-facto benchmark for this purpose. ImageNet is still one of the major datasets on which models are evaluated for their generalization capabilities in computer vision as the field moves towards self-supervised algorithms. Please see the section on the future of the dataset in [1](https://arxiv.org/abs/1409.0575) for a discussion of its social impact.
### Discussion of Biases
1. A 2019 [study](https://image-net.org/update-sep-17-2019.php) of the history of the multiple layers (taxonomy, object classes, and labeling) of ImageNet and WordNet described how bias is deeply embedded in most classification approaches for all sorts of images.
1. A [study](https://arxiv.org/abs/1811.12231) has also shown that ImageNet-trained models are biased towards texture rather than shape, which is in contrast with how humans perform object classification. Increasing the shape bias improves accuracy and robustness.
1. Another [study](https://arxiv.org/abs/2109.13228) discusses further potential issues and biases with the ImageNet dataset and provides an alternative benchmark for the image classification task, noting that the collected data contains images of humans taken without their consent.
1. ImageNet data with face obfuscation is also provided at [this link](https://image-net.org/face-obfuscation/).
1. A study on the genealogy of ImageNet, examining the "norms, values, and assumptions" behind it, can be found at [this link](https://journals.sagepub.com/doi/full/10.1177/20539517211035955).
1. See [this study](https://arxiv.org/abs/1912.07726) on filtering and balancing the distribution of the people subtree in the larger, complete ImageNet.
### Other Known Limitations
1. Since most of the images were collected from the internet, keep in mind that some images in ImageNet might be subject to copyright. See the following papers for more details: [[1]](https://arxiv.org/abs/2109.13228) [[2]](https://arxiv.org/abs/1409.0575) [[3]](https://ieeexplore.ieee.org/abstract/document/5206848).
## Additional Information
### Dataset Curators
Authors of [[1]](https://arxiv.org/abs/1409.0575) and [[2]](https://ieeexplore.ieee.org/abstract/document/5206848):
- Olga Russakovsky
- Jia Deng
- Hao Su
- Jonathan Krause
- Sanjeev Satheesh
- Wei Dong
- Richard Socher
- Li-Jia Li
- Kai Li
- Sean Ma
- Zhiheng Huang
- Andrej Karpathy
- Aditya Khosla
- Michael Bernstein
- Alexander C Berg
- Li Fei-Fei
### Licensing Information
In exchange for permission to use the ImageNet database (the "Database") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
1. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
1. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.
1. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
1. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
1. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
1. The law of the State of New Jersey shall apply to all disputes under this agreement.
### Citation Information
```bibtex
@article{imagenet15russakovsky,
Author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
Title = { {ImageNet Large Scale Visual Recognition Challenge} },
Year = {2015},
journal = {International Journal of Computer Vision (IJCV)},
doi = {10.1007/s11263-015-0816-y},
volume={115},
number={3},
pages={211-252}
}
```
### Contributions
Thanks to [@apsdehal](https://github.com/apsdehal) for adding this dataset. | imagenet-1k | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1409.0575",
"arxiv:1912.07726",
"arxiv:1811.12231",
"arxiv:2109.13228",
"region:us"
] | 2022-05-02T15:33:23+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "imagenet-1k-1", "pretty_name": "ImageNet", "license_details": "imagenet-agreement", "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to ImageNet Terms of Access:\n[RESEARCHER_FULLNAME] (the \"Researcher\") has requested permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University, Stanford University and Hugging Face make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, Stanford University and Hugging Face, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University, Stanford University and Hugging Face reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. 
The law of the State of New Jersey shall apply to all disputes under this agreement.", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "tench, Tinca tinca", "1": "goldfish, Carassius auratus", "2": "great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3": "tiger shark, Galeocerdo cuvieri", "4": "hammerhead, hammerhead shark", "5": "electric ray, crampfish, numbfish, torpedo", "6": "stingray", "7": "cock", "8": "hen", "9": "ostrich, Struthio camelus", "10": "brambling, Fringilla montifringilla", "11": "goldfinch, Carduelis carduelis", "12": "house finch, linnet, Carpodacus mexicanus", "13": "junco, snowbird", "14": "indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15": "robin, American robin, Turdus migratorius", "16": "bulbul", "17": "jay", "18": "magpie", "19": "chickadee", "20": "water ouzel, dipper", "21": "kite", "22": "bald eagle, American eagle, Haliaeetus leucocephalus", "23": "vulture", "24": "great grey owl, great gray owl, Strix nebulosa", "25": "European fire salamander, Salamandra salamandra", "26": "common newt, Triturus vulgaris", "27": "eft", "28": "spotted salamander, Ambystoma maculatum", "29": "axolotl, mud puppy, Ambystoma mexicanum", "30": "bullfrog, Rana catesbeiana", "31": "tree frog, tree-frog", "32": "tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33": "loggerhead, loggerhead turtle, Caretta caretta", "34": "leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35": "mud turtle", "36": "terrapin", "37": "box turtle, box tortoise", "38": "banded gecko", "39": "common iguana, iguana, Iguana iguana", "40": "American chameleon, anole, Anolis carolinensis", "41": "whiptail, whiptail lizard", "42": "agama", "43": "frilled lizard, Chlamydosaurus kingi", "44": "alligator lizard", "45": "Gila monster, Heloderma suspectum", "46": "green lizard, Lacerta viridis", "47": "African chameleon, Chamaeleo chamaeleon", "48": "Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "49": "African crocodile, Nile crocodile, Crocodylus niloticus", "50": "American alligator, Alligator mississipiensis", "51": "triceratops", "52": "thunder snake, worm snake, Carphophis amoenus", "53": "ringneck snake, ring-necked snake, ring snake", "54": "hognose snake, puff adder, sand viper", "55": "green snake, grass snake", "56": "king snake, kingsnake", "57": "garter snake, grass snake", "58": "water snake", "59": "vine snake", "60": "night snake, Hypsiglena torquata", "61": "boa constrictor, Constrictor constrictor", "62": "rock python, rock snake, Python sebae", "63": "Indian cobra, Naja naja", "64": "green mamba", "65": "sea snake", "66": "horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67": "diamondback, diamondback rattlesnake, Crotalus adamanteus", "68": "sidewinder, horned rattlesnake, Crotalus cerastes", "69": "trilobite", "70": "harvestman, daddy longlegs, Phalangium opilio", "71": "scorpion", "72": "black and gold garden spider, Argiope aurantia", "73": "barn spider, Araneus cavaticus", "74": "garden spider, Aranea diademata", "75": "black widow, Latrodectus mactans", "76": "tarantula", "77": "wolf spider, hunting spider", "78": "tick", "79": "centipede", "80": "black grouse", "81": "ptarmigan", "82": "ruffed grouse, partridge, Bonasa umbellus", "83": "prairie chicken, prairie grouse, prairie fowl", "84": "peacock", "85": "quail", "86": "partridge", "87": "African grey, African gray, 
Psittacus erithacus", "88": "macaw", "89": "sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90": "lorikeet", "91": "coucal", "92": "bee eater", "93": "hornbill", "94": "hummingbird", "95": "jacamar", "96": "toucan", "97": "drake", "98": "red-breasted merganser, Mergus serrator", "99": "goose", "100": "black swan, Cygnus atratus", "101": "tusker", "102": "echidna, spiny anteater, anteater", "103": "platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104": "wallaby, brush kangaroo", "105": "koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106": "wombat", "107": "jellyfish", "108": "sea anemone, anemone", "109": "brain coral", "110": "flatworm, platyhelminth", "111": "nematode, nematode worm, roundworm", "112": "conch", "113": "snail", "114": "slug", "115": "sea slug, nudibranch", "116": "chiton, coat-of-mail shell, sea cradle, polyplacophore", "117": "chambered nautilus, pearly nautilus, nautilus", "118": "Dungeness crab, Cancer magister", "119": "rock crab, Cancer irroratus", "120": "fiddler crab", "121": "king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122": "American lobster, Northern lobster, Maine lobster, Homarus americanus", "123": "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124": "crayfish, crawfish, crawdad, crawdaddy", "125": "hermit crab", "126": "isopod", "127": "white stork, Ciconia ciconia", "128": "black stork, Ciconia nigra", "129": "spoonbill", "130": "flamingo", "131": "little blue heron, Egretta caerulea", "132": "American egret, great white heron, Egretta albus", "133": "bittern", "134": "crane", "135": "limpkin, Aramus pictus", "136": "European gallinule, Porphyrio porphyrio", "137": "American coot, marsh hen, mud hen, water hen, Fulica americana", "138": "bustard", "139": "ruddy turnstone, Arenaria interpres", "140": "red-backed sandpiper, dunlin, Erolia alpina", "141": "redshank, Tringa totanus", "142": "dowitcher", "143": "oystercatcher, oyster catcher", "144": "pelican", "145": "king penguin, Aptenodytes patagonica", "146": "albatross, mollymawk", "147": "grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148": "killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149": "dugong, Dugong dugon", "150": "sea lion", "151": "Chihuahua", "152": "Japanese spaniel", "153": "Maltese dog, Maltese terrier, Maltese", "154": "Pekinese, Pekingese, Peke", "155": "Shih-Tzu", "156": "Blenheim spaniel", "157": "papillon", "158": "toy terrier", "159": "Rhodesian ridgeback", "160": "Afghan hound, Afghan", "161": "basset, basset hound", "162": "beagle", "163": "bloodhound, sleuthhound", "164": "bluetick", "165": "black-and-tan coonhound", "166": "Walker hound, Walker foxhound", "167": "English foxhound", "168": "redbone", "169": "borzoi, Russian wolfhound", "170": "Irish wolfhound", "171": "Italian greyhound", "172": "whippet", "173": "Ibizan hound, Ibizan Podenco", "174": "Norwegian elkhound, elkhound", "175": "otterhound, otter hound", "176": "Saluki, gazelle hound", "177": "Scottish deerhound, deerhound", "178": "Weimaraner", "179": "Staffordshire bullterrier, Staffordshire bull terrier", "180": "American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181": "Bedlington terrier", "182": "Border terrier", "183": "Kerry blue terrier", "184": "Irish terrier", "185": "Norfolk terrier", "186": "Norwich terrier", "187": "Yorkshire terrier", "188": 
"wire-haired fox terrier", "189": "Lakeland terrier", "190": "Sealyham terrier, Sealyham", "191": "Airedale, Airedale terrier", "192": "cairn, cairn terrier", "193": "Australian terrier", "194": "Dandie Dinmont, Dandie Dinmont terrier", "195": "Boston bull, Boston terrier", "196": "miniature schnauzer", "197": "giant schnauzer", "198": "standard schnauzer", "199": "Scotch terrier, Scottish terrier, Scottie", "200": "Tibetan terrier, chrysanthemum dog", "201": "silky terrier, Sydney silky", "202": "soft-coated wheaten terrier", "203": "West Highland white terrier", "204": "Lhasa, Lhasa apso", "205": "flat-coated retriever", "206": "curly-coated retriever", "207": "golden retriever", "208": "Labrador retriever", "209": "Chesapeake Bay retriever", "210": "German short-haired pointer", "211": "vizsla, Hungarian pointer", "212": "English setter", "213": "Irish setter, red setter", "214": "Gordon setter", "215": "Brittany spaniel", "216": "clumber, clumber spaniel", "217": "English springer, English springer spaniel", "218": "Welsh springer spaniel", "219": "cocker spaniel, English cocker spaniel, cocker", "220": "Sussex spaniel", "221": "Irish water spaniel", "222": "kuvasz", "223": "schipperke", "224": "groenendael", "225": "malinois", "226": "briard", "227": "kelpie", "228": "komondor", "229": "Old English sheepdog, bobtail", "230": "Shetland sheepdog, Shetland sheep dog, Shetland", "231": "collie", "232": "Border collie", "233": "Bouvier des Flandres, Bouviers des Flandres", "234": "Rottweiler", "235": "German shepherd, German shepherd dog, German police dog, alsatian", "236": "Doberman, Doberman pinscher", "237": "miniature pinscher", "238": "Greater Swiss Mountain dog", "239": "Bernese mountain dog", "240": "Appenzeller", "241": "EntleBucher", "242": "boxer", "243": "bull mastiff", "244": "Tibetan mastiff", "245": "French bulldog", "246": "Great Dane", "247": "Saint Bernard, St Bernard", "248": "Eskimo dog, husky", "249": "malamute, malemute, Alaskan malamute", "250": "Siberian husky", "251": "dalmatian, coach dog, carriage dog", "252": "affenpinscher, monkey pinscher, monkey dog", "253": "basenji", "254": "pug, pug-dog", "255": "Leonberg", "256": "Newfoundland, Newfoundland dog", "257": "Great Pyrenees", "258": "Samoyed, Samoyede", "259": "Pomeranian", "260": "chow, chow chow", "261": "keeshond", "262": "Brabancon griffon", "263": "Pembroke, Pembroke Welsh corgi", "264": "Cardigan, Cardigan Welsh corgi", "265": "toy poodle", "266": "miniature poodle", "267": "standard poodle", "268": "Mexican hairless", "269": "timber wolf, grey wolf, gray wolf, Canis lupus", "270": "white wolf, Arctic wolf, Canis lupus tundrarum", "271": "red wolf, maned wolf, Canis rufus, Canis niger", "272": "coyote, prairie wolf, brush wolf, Canis latrans", "273": "dingo, warrigal, warragal, Canis dingo", "274": "dhole, Cuon alpinus", "275": "African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276": "hyena, hyaena", "277": "red fox, Vulpes vulpes", "278": "kit fox, Vulpes macrotis", "279": "Arctic fox, white fox, Alopex lagopus", "280": "grey fox, gray fox, Urocyon cinereoargenteus", "281": "tabby, tabby cat", "282": "tiger cat", "283": "Persian cat", "284": "Siamese cat, Siamese", "285": "Egyptian cat", "286": "cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287": "lynx, catamount", "288": "leopard, Panthera pardus", "289": "snow leopard, ounce, Panthera uncia", "290": "jaguar, panther, Panthera onca, Felis onca", "291": "lion, king of beasts, Panthera leo", "292": "tiger, 
Panthera tigris", "293": "cheetah, chetah, Acinonyx jubatus", "294": "brown bear, bruin, Ursus arctos", "295": "American black bear, black bear, Ursus americanus, Euarctos americanus", "296": "ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297": "sloth bear, Melursus ursinus, Ursus ursinus", "298": "mongoose", "299": "meerkat, mierkat", "300": "tiger beetle", "301": "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302": "ground beetle, carabid beetle", "303": "long-horned beetle, longicorn, longicorn beetle", "304": "leaf beetle, chrysomelid", "305": "dung beetle", "306": "rhinoceros beetle", "307": "weevil", "308": "fly", "309": "bee", "310": "ant, emmet, pismire", "311": "grasshopper, hopper", "312": "cricket", "313": "walking stick, walkingstick, stick insect", "314": "cockroach, roach", "315": "mantis, mantid", "316": "cicada, cicala", "317": "leafhopper", "318": "lacewing, lacewing fly", "319": "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320": "damselfly", "321": "admiral", "322": "ringlet, ringlet butterfly", "323": "monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324": "cabbage butterfly", "325": "sulphur butterfly, sulfur butterfly", "326": "lycaenid, lycaenid butterfly", "327": "starfish, sea star", "328": "sea urchin", "329": "sea cucumber, holothurian", "330": "wood rabbit, cottontail, cottontail rabbit", "331": "hare", "332": "Angora, Angora rabbit", "333": "hamster", "334": "porcupine, hedgehog", "335": "fox squirrel, eastern fox squirrel, Sciurus niger", "336": "marmot", "337": "beaver", "338": "guinea pig, Cavia cobaya", "339": "sorrel", "340": "zebra", "341": "hog, pig, grunter, squealer, Sus scrofa", "342": "wild boar, boar, Sus scrofa", "343": "warthog", "344": "hippopotamus, hippo, river horse, Hippopotamus amphibius", "345": "ox", "346": "water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347": "bison", "348": "ram, tup", "349": "bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350": "ibex, Capra ibex", "351": "hartebeest", "352": "impala, Aepyceros melampus", "353": "gazelle", "354": "Arabian camel, dromedary, Camelus dromedarius", "355": "llama", "356": "weasel", "357": "mink", "358": "polecat, fitch, foulmart, foumart, Mustela putorius", "359": "black-footed ferret, ferret, Mustela nigripes", "360": "otter", "361": "skunk, polecat, wood pussy", "362": "badger", "363": "armadillo", "364": "three-toed sloth, ai, Bradypus tridactylus", "365": "orangutan, orang, orangutang, Pongo pygmaeus", "366": "gorilla, Gorilla gorilla", "367": "chimpanzee, chimp, Pan troglodytes", "368": "gibbon, Hylobates lar", "369": "siamang, Hylobates syndactylus, Symphalangus syndactylus", "370": "guenon, guenon monkey", "371": "patas, hussar monkey, Erythrocebus patas", "372": "baboon", "373": "macaque", "374": "langur", "375": "colobus, colobus monkey", "376": "proboscis monkey, Nasalis larvatus", "377": "marmoset", "378": "capuchin, ringtail, Cebus capucinus", "379": "howler monkey, howler", "380": "titi, titi monkey", "381": "spider monkey, Ateles geoffroyi", "382": "squirrel monkey, Saimiri sciureus", "383": "Madagascar cat, ring-tailed lemur, Lemur catta", "384": "indri, indris, Indri indri, Indri brevicaudatus", "385": "Indian elephant, Elephas maximus", "386": "African elephant, Loxodonta africana", "387": "lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388": "giant panda, 
panda, panda bear, coon bear, Ailuropoda melanoleuca", "389": "barracouta, snoek", "390": "eel", "391": "coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392": "rock beauty, Holocanthus tricolor", "393": "anemone fish", "394": "sturgeon", "395": "gar, garfish, garpike, billfish, Lepisosteus osseus", "396": "lionfish", "397": "puffer, pufferfish, blowfish, globefish", "398": "abacus", "399": "abaya", "400": "academic gown, academic robe, judge's robe", "401": "accordion, piano accordion, squeeze box", "402": "acoustic guitar", "403": "aircraft carrier, carrier, flattop, attack aircraft carrier", "404": "airliner", "405": "airship, dirigible", "406": "altar", "407": "ambulance", "408": "amphibian, amphibious vehicle", "409": "analog clock", "410": "apiary, bee house", "411": "apron", "412": "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413": "assault rifle, assault gun", "414": "backpack, back pack, knapsack, packsack, rucksack, haversack", "415": "bakery, bakeshop, bakehouse", "416": "balance beam, beam", "417": "balloon", "418": "ballpoint, ballpoint pen, ballpen, Biro", "419": "Band Aid", "420": "banjo", "421": "bannister, banister, balustrade, balusters, handrail", "422": "barbell", "423": "barber chair", "424": "barbershop", "425": "barn", "426": "barometer", "427": "barrel, cask", "428": "barrow, garden cart, lawn cart, wheelbarrow", "429": "baseball", "430": "basketball", "431": "bassinet", "432": "bassoon", "433": "bathing cap, swimming cap", "434": "bath towel", "435": "bathtub, bathing tub, bath, tub", "436": "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437": "beacon, lighthouse, beacon light, pharos", "438": "beaker", "439": "bearskin, busby, shako", "440": "beer bottle", "441": "beer glass", "442": "bell cote, bell cot", "443": "bib", "444": "bicycle-built-for-two, tandem bicycle, tandem", "445": "bikini, two-piece", "446": "binder, ring-binder", "447": "binoculars, field glasses, opera glasses", "448": "birdhouse", "449": "boathouse", "450": "bobsled, bobsleigh, bob", "451": "bolo tie, bolo, bola tie, bola", "452": "bonnet, poke bonnet", "453": "bookcase", "454": "bookshop, bookstore, bookstall", "455": "bottlecap", "456": "bow", "457": "bow tie, bow-tie, bowtie", "458": "brass, memorial tablet, plaque", "459": "brassiere, bra, bandeau", "460": "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461": "breastplate, aegis, egis", "462": "broom", "463": "bucket, pail", "464": "buckle", "465": "bulletproof vest", "466": "bullet train, bullet", "467": "butcher shop, meat market", "468": "cab, hack, taxi, taxicab", "469": "caldron, cauldron", "470": "candle, taper, wax light", "471": "cannon", "472": "canoe", "473": "can opener, tin opener", "474": "cardigan", "475": "car mirror", "476": "carousel, carrousel, merry-go-round, roundabout, whirligig", "477": "carpenter's kit, tool kit", "478": "carton", "479": "car wheel", "480": "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481": "cassette", "482": "cassette player", "483": "castle", "484": "catamaran", "485": "CD player", "486": "cello, violoncello", "487": "cellular telephone, cellular phone, cellphone, cell, mobile phone", "488": "chain", "489": "chainlink fence", "490": "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491": "chain saw, chainsaw", "492": "chest", "493": "chiffonier, 
commode", "494": "chime, bell, gong", "495": "china cabinet, china closet", "496": "Christmas stocking", "497": "church, church building", "498": "cinema, movie theater, movie theatre, movie house, picture palace", "499": "cleaver, meat cleaver, chopper", "500": "cliff dwelling", "501": "cloak", "502": "clog, geta, patten, sabot", "503": "cocktail shaker", "504": "coffee mug", "505": "coffeepot", "506": "coil, spiral, volute, whorl, helix", "507": "combination lock", "508": "computer keyboard, keypad", "509": "confectionery, confectionary, candy store", "510": "container ship, containership, container vessel", "511": "convertible", "512": "corkscrew, bottle screw", "513": "cornet, horn, trumpet, trump", "514": "cowboy boot", "515": "cowboy hat, ten-gallon hat", "516": "cradle", "517": "crane2", "518": "crash helmet", "519": "crate", "520": "crib, cot", "521": "Crock Pot", "522": "croquet ball", "523": "crutch", "524": "cuirass", "525": "dam, dike, dyke", "526": "desk", "527": "desktop computer", "528": "dial telephone, dial phone", "529": "diaper, nappy, napkin", "530": "digital clock", "531": "digital watch", "532": "dining table, board", "533": "dishrag, dishcloth", "534": "dishwasher, dish washer, dishwashing machine", "535": "disk brake, disc brake", "536": "dock, dockage, docking facility", "537": "dogsled, dog sled, dog sleigh", "538": "dome", "539": "doormat, welcome mat", "540": "drilling platform, offshore rig", "541": "drum, membranophone, tympan", "542": "drumstick", "543": "dumbbell", "544": "Dutch oven", "545": "electric fan, blower", "546": "electric guitar", "547": "electric locomotive", "548": "entertainment center", "549": "envelope", "550": "espresso maker", "551": "face powder", "552": "feather boa, boa", "553": "file, file cabinet, filing cabinet", "554": "fireboat", "555": "fire engine, fire truck", "556": "fire screen, fireguard", "557": "flagpole, flagstaff", "558": "flute, transverse flute", "559": "folding chair", "560": "football helmet", "561": "forklift", "562": "fountain", "563": "fountain pen", "564": "four-poster", "565": "freight car", "566": "French horn, horn", "567": "frying pan, frypan, skillet", "568": "fur coat", "569": "garbage truck, dustcart", "570": "gasmask, respirator, gas helmet", "571": "gas pump, gasoline pump, petrol pump, island dispenser", "572": "goblet", "573": "go-kart", "574": "golf ball", "575": "golfcart, golf cart", "576": "gondola", "577": "gong, tam-tam", "578": "gown", "579": "grand piano, grand", "580": "greenhouse, nursery, glasshouse", "581": "grille, radiator grille", "582": "grocery store, grocery, food market, market", "583": "guillotine", "584": "hair slide", "585": "hair spray", "586": "half track", "587": "hammer", "588": "hamper", "589": "hand blower, blow dryer, blow drier, hair dryer, hair drier", "590": "hand-held computer, hand-held microcomputer", "591": "handkerchief, hankie, hanky, hankey", "592": "hard disc, hard disk, fixed disk", "593": "harmonica, mouth organ, harp, mouth harp", "594": "harp", "595": "harvester, reaper", "596": "hatchet", "597": "holster", "598": "home theater, home theatre", "599": "honeycomb", "600": "hook, claw", "601": "hoopskirt, crinoline", "602": "horizontal bar, high bar", "603": "horse cart, horse-cart", "604": "hourglass", "605": "iPod", "606": "iron, smoothing iron", "607": "jack-o'-lantern", "608": "jean, blue jean, denim", "609": "jeep, landrover", "610": "jersey, T-shirt, tee shirt", "611": "jigsaw puzzle", "612": "jinrikisha, ricksha, rickshaw", "613": "joystick", "614": 
"kimono", "615": "knee pad", "616": "knot", "617": "lab coat, laboratory coat", "618": "ladle", "619": "lampshade, lamp shade", "620": "laptop, laptop computer", "621": "lawn mower, mower", "622": "lens cap, lens cover", "623": "letter opener, paper knife, paperknife", "624": "library", "625": "lifeboat", "626": "lighter, light, igniter, ignitor", "627": "limousine, limo", "628": "liner, ocean liner", "629": "lipstick, lip rouge", "630": "Loafer", "631": "lotion", "632": "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633": "loupe, jeweler's loupe", "634": "lumbermill, sawmill", "635": "magnetic compass", "636": "mailbag, postbag", "637": "mailbox, letter box", "638": "maillot", "639": "maillot, tank suit", "640": "manhole cover", "641": "maraca", "642": "marimba, xylophone", "643": "mask", "644": "matchstick", "645": "maypole", "646": "maze, labyrinth", "647": "measuring cup", "648": "medicine chest, medicine cabinet", "649": "megalith, megalithic structure", "650": "microphone, mike", "651": "microwave, microwave oven", "652": "military uniform", "653": "milk can", "654": "minibus", "655": "miniskirt, mini", "656": "minivan", "657": "missile", "658": "mitten", "659": "mixing bowl", "660": "mobile home, manufactured home", "661": "Model T", "662": "modem", "663": "monastery", "664": "monitor", "665": "moped", "666": "mortar", "667": "mortarboard", "668": "mosque", "669": "mosquito net", "670": "motor scooter, scooter", "671": "mountain bike, all-terrain bike, off-roader", "672": "mountain tent", "673": "mouse, computer mouse", "674": "mousetrap", "675": "moving van", "676": "muzzle", "677": "nail", "678": "neck brace", "679": "necklace", "680": "nipple", "681": "notebook, notebook computer", "682": "obelisk", "683": "oboe, hautboy, hautbois", "684": "ocarina, sweet potato", "685": "odometer, hodometer, mileometer, milometer", "686": "oil filter", "687": "organ, pipe organ", "688": "oscilloscope, scope, cathode-ray oscilloscope, CRO", "689": "overskirt", "690": "oxcart", "691": "oxygen mask", "692": "packet", "693": "paddle, boat paddle", "694": "paddlewheel, paddle wheel", "695": "padlock", "696": "paintbrush", "697": "pajama, pyjama, pj's, jammies", "698": "palace", "699": "panpipe, pandean pipe, syrinx", "700": "paper towel", "701": "parachute, chute", "702": "parallel bars, bars", "703": "park bench", "704": "parking meter", "705": "passenger car, coach, carriage", "706": "patio, terrace", "707": "pay-phone, pay-station", "708": "pedestal, plinth, footstall", "709": "pencil box, pencil case", "710": "pencil sharpener", "711": "perfume, essence", "712": "Petri dish", "713": "photocopier", "714": "pick, plectrum, plectron", "715": "pickelhaube", "716": "picket fence, paling", "717": "pickup, pickup truck", "718": "pier", "719": "piggy bank, penny bank", "720": "pill bottle", "721": "pillow", "722": "ping-pong ball", "723": "pinwheel", "724": "pirate, pirate ship", "725": "pitcher, ewer", "726": "plane, carpenter's plane, woodworking plane", "727": "planetarium", "728": "plastic bag", "729": "plate rack", "730": "plow, plough", "731": "plunger, plumber's helper", "732": "Polaroid camera, Polaroid Land camera", "733": "pole", "734": "police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735": "poncho", "736": "pool table, billiard table, snooker table", "737": "pop bottle, soda bottle", "738": "pot, flowerpot", "739": "potter's wheel", "740": "power drill", "741": "prayer rug, prayer mat", "742": "printer", "743": "prison, prison house", 
"744": "projectile, missile", "745": "projector", "746": "puck, hockey puck", "747": "punching bag, punch bag, punching ball, punchball", "748": "purse", "749": "quill, quill pen", "750": "quilt, comforter, comfort, puff", "751": "racer, race car, racing car", "752": "racket, racquet", "753": "radiator", "754": "radio, wireless", "755": "radio telescope, radio reflector", "756": "rain barrel", "757": "recreational vehicle, RV, R.V.", "758": "reel", "759": "reflex camera", "760": "refrigerator, icebox", "761": "remote control, remote", "762": "restaurant, eating house, eating place, eatery", "763": "revolver, six-gun, six-shooter", "764": "rifle", "765": "rocking chair, rocker", "766": "rotisserie", "767": "rubber eraser, rubber, pencil eraser", "768": "rugby ball", "769": "rule, ruler", "770": "running shoe", "771": "safe", "772": "safety pin", "773": "saltshaker, salt shaker", "774": "sandal", "775": "sarong", "776": "sax, saxophone", "777": "scabbard", "778": "scale, weighing machine", "779": "school bus", "780": "schooner", "781": "scoreboard", "782": "screen, CRT screen", "783": "screw", "784": "screwdriver", "785": "seat belt, seatbelt", "786": "sewing machine", "787": "shield, buckler", "788": "shoe shop, shoe-shop, shoe store", "789": "shoji", "790": "shopping basket", "791": "shopping cart", "792": "shovel", "793": "shower cap", "794": "shower curtain", "795": "ski", "796": "ski mask", "797": "sleeping bag", "798": "slide rule, slipstick", "799": "sliding door", "800": "slot, one-armed bandit", "801": "snorkel", "802": "snowmobile", "803": "snowplow, snowplough", "804": "soap dispenser", "805": "soccer ball", "806": "sock", "807": "solar dish, solar collector, solar furnace", "808": "sombrero", "809": "soup bowl", "810": "space bar", "811": "space heater", "812": "space shuttle", "813": "spatula", "814": "speedboat", "815": "spider web, spider's web", "816": "spindle", "817": "sports car, sport car", "818": "spotlight, spot", "819": "stage", "820": "steam locomotive", "821": "steel arch bridge", "822": "steel drum", "823": "stethoscope", "824": "stole", "825": "stone wall", "826": "stopwatch, stop watch", "827": "stove", "828": "strainer", "829": "streetcar, tram, tramcar, trolley, trolley car", "830": "stretcher", "831": "studio couch, day bed", "832": "stupa, tope", "833": "submarine, pigboat, sub, U-boat", "834": "suit, suit of clothes", "835": "sundial", "836": "sunglass", "837": "sunglasses, dark glasses, shades", "838": "sunscreen, sunblock, sun blocker", "839": "suspension bridge", "840": "swab, swob, mop", "841": "sweatshirt", "842": "swimming trunks, bathing trunks", "843": "swing", "844": "switch, electric switch, electrical switch", "845": "syringe", "846": "table lamp", "847": "tank, army tank, armored combat vehicle, armoured combat vehicle", "848": "tape player", "849": "teapot", "850": "teddy, teddy bear", "851": "television, television system", "852": "tennis ball", "853": "thatch, thatched roof", "854": "theater curtain, theatre curtain", "855": "thimble", "856": "thresher, thrasher, threshing machine", "857": "throne", "858": "tile roof", "859": "toaster", "860": "tobacco shop, tobacconist shop, tobacconist", "861": "toilet seat", "862": "torch", "863": "totem pole", "864": "tow truck, tow car, wrecker", "865": "toyshop", "866": "tractor", "867": "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868": "tray", "869": "trench coat", "870": "tricycle, trike, velocipede", "871": "trimaran", "872": "tripod", "873": "triumphal arch", "874": 
"trolleybus, trolley coach, trackless trolley", "875": "trombone", "876": "tub, vat", "877": "turnstile", "878": "typewriter keyboard", "879": "umbrella", "880": "unicycle, monocycle", "881": "upright, upright piano", "882": "vacuum, vacuum cleaner", "883": "vase", "884": "vault", "885": "velvet", "886": "vending machine", "887": "vestment", "888": "viaduct", "889": "violin, fiddle", "890": "volleyball", "891": "waffle iron", "892": "wall clock", "893": "wallet, billfold, notecase, pocketbook", "894": "wardrobe, closet, press", "895": "warplane, military plane", "896": "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897": "washer, automatic washer, washing machine", "898": "water bottle", "899": "water jug", "900": "water tower", "901": "whiskey jug", "902": "whistle", "903": "wig", "904": "window screen", "905": "window shade", "906": "Windsor tie", "907": "wine bottle", "908": "wing", "909": "wok", "910": "wooden spoon", "911": "wool, woolen, woollen", "912": "worm fence, snake fence, snake-rail fence, Virginia fence", "913": "wreck", "914": "yawl", "915": "yurt", "916": "web site, website, internet site, site", "917": "comic book", "918": "crossword puzzle, crossword", "919": "street sign", "920": "traffic light, traffic signal, stoplight", "921": "book jacket, dust cover, dust jacket, dust wrapper", "922": "menu", "923": "plate", "924": "guacamole", "925": "consomme", "926": "hot pot, hotpot", "927": "trifle", "928": "ice cream, icecream", "929": "ice lolly, lolly, lollipop, popsicle", "930": "French loaf", "931": "bagel, beigel", "932": "pretzel", "933": "cheeseburger", "934": "hotdog, hot dog, red hot", "935": "mashed potato", "936": "head cabbage", "937": "broccoli", "938": "cauliflower", "939": "zucchini, courgette", "940": "spaghetti squash", "941": "acorn squash", "942": "butternut squash", "943": "cucumber, cuke", "944": "artichoke, globe artichoke", "945": "bell pepper", "946": "cardoon", "947": "mushroom", "948": "Granny Smith", "949": "strawberry", "950": "orange", "951": "lemon", "952": "fig", "953": "pineapple, ananas", "954": "banana", "955": "jackfruit, jak, jack", "956": "custard apple", "957": "pomegranate", "958": "hay", "959": "carbonara", "960": "chocolate sauce, chocolate syrup", "961": "dough", "962": "meat loaf, meatloaf", "963": "pizza, pizza pie", "964": "potpie", "965": "burrito", "966": "red wine", "967": "espresso", "968": "cup", "969": "eggnog", "970": "alp", "971": "bubble", "972": "cliff, drop, drop-off", "973": "coral reef", "974": "geyser", "975": "lakeside, lakeshore", "976": "promontory, headland, head, foreland", "977": "sandbar, sand bar", "978": "seashore, coast, seacoast, sea-coast", "979": "valley, vale", "980": "volcano", "981": "ballplayer, baseball player", "982": "groom, bridegroom", "983": "scuba diver", "984": "rapeseed", "985": "daisy", "986": "yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987": "corn", "988": "acorn", "989": "hip, rose hip, rosehip", "990": "buckeye, horse chestnut, conker", "991": "coral fungus", "992": "agaric", "993": "gyromitra", "994": "stinkhorn, carrion fungus", "995": "earthstar", "996": "hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997": "bolete", "998": "ear, spike, capitulum", "999": "toilet tissue, toilet paper, bathroom tissue"}}}}], "splits": [{"name": "test", "num_bytes": 13613661561, "num_examples": 100000}, {"name": "train", "num_bytes": 146956944242, "num_examples": 1281167}, {"name": "validation", "num_bytes": 
6709003386, "num_examples": 50000}], "download_size": 166009941208, "dataset_size": 167279609189}} | 2023-12-21T20:27:21+00:00 | [
"1409.0575",
"1912.07726",
"1811.12231",
"2109.13228"
] | [
"en"
] | TAGS
#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-other #arxiv-1409.0575 #arxiv-1912.07726 #arxiv-1811.12231 #arxiv-2109.13228 #region-us
| Dataset Card for ImageNet
=========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper: URL
* Leaderboard: URL
* Point of Contact: mailto: URL@URL
### Dataset Summary
ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet; the majority of them are nouns (80,000+). ImageNet aims to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated.
This dataset provides access to ImageNet (ILSVRC) 2012, which is the most commonly used subset of ImageNet. It spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. This version also has the patch that fixes some of the corrupted test set images already applied. For the full ImageNet dataset presented in [[2]](URL please check the download section of the main website.
### Supported Tasks and Leaderboards
* 'image-classification': The goal of this task is to classify a given image into one of 1000 ImageNet classes. The leaderboard is available here.
To evaluate the 'imagenet-classification' accuracy on the test split, one must first create an account at URL. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at URL The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:
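(The snippet below is an illustrative stand-in with made-up class indices; each line holds the five 1-indexed predictions for one test image, in rank order.)

```
670 778 532 822 947
108 954 435 766 352
507 879 882 331 290
```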
The export format is described in full in "URL" within the 2013 development kit available here: URL. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the format of the text file is 100,000 lines, one corresponding to each image in the test split. Each line of integers corresponds to the rank-ordered top-5 predictions for that test image. The integers are 1-indexed, matching the line numbers in the corresponding labels file. See 'imagenet2012\_labels.txt'.
### Languages
The class labels in the dataset are in English.
Dataset Structure
-----------------
### Data Instances
An example looks like the one below:
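(A minimal illustrative sketch; the actual image size, mode, and label value vary per sample.)

```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375>,
  'label': 23
}
```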
### Data Fields
The data instances have the following fields:
* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'.
* 'label': an 'int' classification label. -1 for 'test' set as the labels are missing.
The labels are indexed based on a sorted list of synset ids such as 'n07565083', which we automatically map to the original class names. The original dataset is divided into folders based on these synset ids. To get a mapping from the original synset names, use the file LOC\_synset\_mapping.txt available on the Kaggle challenge page. You can also use the 'dataset\_instance.features["label"].int2str' function to get the class for a particular label index. Also note that labels for the test set are returned as -1, as they are missing.
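A minimal usage sketch with the Hugging Face `datasets` library (the dataset id `imagenet-1k` and the split name are assumptions; accessing the files may also require accepting the dataset's terms on the Hub):

```python
from datasets import load_dataset

# Assumed id/split; adjust to the actual repository you are using.
ds = load_dataset("imagenet-1k", split="validation")

sample = ds[0]            # query the sample index first ...
image = sample["image"]   # ... then access the "image" column (a decoded PIL.Image.Image)
label = sample["label"]   # integer class index; -1 on the test split

# Map the integer label back to a human-readable class name.
print(label, ds.features["label"].int2str(label))
```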
### Data Splits
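| Split | Number of examples |
|------------|--------------------|
| train | 1,281,167 |
| validation | 50,000 |
| test | 100,000 |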
Dataset Creation
----------------
### Curation Rationale
The ImageNet project was inspired by two important needs in computer vision research. The first was the need to establish a clear North Star problem in computer vision. While the field enjoyed an abundance of important tasks to work on, from stereo vision to image retrieval, from 3D reconstruction to image segmentation, object categorization was recognized to be one of the most fundamental capabilities of both human and machine vision. Hence there was a growing demand for a high quality object categorization benchmark with clearly established evaluation metrics. Second, there was a critical need for more data to enable more generalizable machine learning methods. Ever since the birth of the digital era and the availability of web-scale data exchanges, researchers in these fields have been working hard to design more and more sophisticated algorithms to index, retrieve, organize and annotate multimedia data. But good research requires good resources. To tackle this problem at scale (think of your growing personal collection of digital images, or videos, or a commercial web search engine’s database), it was critical to provide researchers with a large-scale image database for both training and testing. The convergence of these two intellectual reasons motivated us to build ImageNet.
### Source Data
#### Initial Data Collection and Normalization
Initial data for the ImageNet image classification task consists of photographs collected from Flickr and other search engines, manually labeled with the presence of one of 1000 object categories. Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs 1. The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow.
#### Who are the source language producers?
WordNet synsets, further quality-controlled by human annotators. The images are from Flickr.
### Annotations
#### Annotation process
The annotation process of collecting ImageNet for the image classification task is a three-step process.
1. Defining the 1000 object categories for the image classification task. These categories have evolved over the years.
2. Collecting the candidate image for these object categories using a search engine.
3. Quality control on the candidate images by using human annotators on Amazon Mechanical Turk (AMT) to make sure the image has the synset it was collected for.
See the section 3.1 in 1 for more details on data collection procedure and 2 for general information on ImageNet.
#### Who are the annotators?
Images are automatically fetched from an image search engine based on the synsets and filtered using human annotators on Amazon Mechanical Turk. See 1 for more details.
### Personal and Sensitive Information
The 1,000 categories selected for this subset contain only 3 people categories (scuba diver, bridegroom, and baseball player) while the full ImageNet contains 2,832 people categories under the person subtree (accounting for roughly 8.3% of the total images). This subset does contain images of people without their consent, though the study in [[1]](URL on obfuscating faces of the people in the ImageNet 2012 subset shows that blurring people's faces causes only a very minor decrease in accuracy (~0.6%), suggesting that privacy-aware models can be trained on ImageNet. For the larger ImageNet, there has been an attempt at filtering and balancing the people subtree.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The ImageNet dataset has been crucial to the advancement of deep learning technology, serving as the standard benchmark for computer vision models. The dataset aims to probe models on their understanding of objects and has become the de-facto dataset for this purpose. ImageNet is still one of the major datasets on which models are evaluated for their generalization in computer vision capabilities as the field moves towards self-supervised algorithms. Please see the future section in 1 for a discussion of the social impact of the dataset.
### Discussion of Biases
1. A study of the history of the multiple layers (taxonomy, object classes and labeling) of ImageNet and WordNet in 2019 described how bias is deeply embedded in most classification approaches for all sorts of images.
2. A study has also shown that ImageNet-trained models are biased towards texture rather than shape, which is in contrast with how humans do object classification. Increasing the shape bias improves accuracy and robustness.
3. Another study discusses more potential issues and biases with the ImageNet dataset and provides an alternative benchmark for the image classification task. The data collected contains humans without their consent.
4. ImageNet data with face obfuscation is also provided at this link
5. A study on the genealogy of ImageNet, about the "norms, values, and assumptions" in ImageNet, can be found at this link.
6. See this study on filtering and balancing the distribution of the people subtree in the larger, complete ImageNet.
### Other Known Limitations
1. Since most of the images were collected from the internet, keep in mind that some images in ImageNet might be subject to copyright. See the following papers for more details: [[1]](URL [[2]](URL [[3]](URL
Additional Information
----------------------
### Dataset Curators
Authors of [[1]](URL and [[2]](URL
* Olga Russakovsky
* Jia Deng
* Hao Su
* Jonathan Krause
* Sanjeev Satheesh
* Wei Dong
* Richard Socher
* Li-Jia Li
* Kai Li
* Sean Ma
* Zhiheng Huang
* Andrej Karpathy
* Aditya Khosla
* Michael Bernstein
* Alexander C Berg
* Li Fei-Fei
### Licensing Information
In exchange for permission to use the ImageNet database (the "Database") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
7. The law of the State of New Jersey shall apply to all disputes under this agreement.
### Contributions
Thanks to @apsdehal for adding this dataset.
| [
"### Dataset Summary\n\n\nILSVRC 2012, commonly known as 'ImageNet' is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a \"synonym set\" or \"synset\". There are more than 100,000 synsets in WordNet, majority of them are nouns (80,000+). ImageNet aims to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated.\n\n\nThis dataset provides access to ImageNet (ILSVRC) 2012 which is the most commonly used subset of ImageNet. This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. The version also has the patch which fixes some of the corrupted test set images already applied. For full ImageNet dataset presented in [[2]](URL please check the download section of the main website.",
"### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image into one of 1000 ImageNet classes. The leaderboard is available here.\n\n\nTo evaluate the 'imagenet-classification' accuracy on the test split, one must first create an account at URL. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at URL The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is \"Classification submission (top-5 cls error)\". A sample of an exported text file looks like the following:\n\n\nThe export format is described in full in \"URL\" within the 2013 development kit available here: URL Please see the section entitled \"3.3 CLS-LOC submission format\". Briefly, the format of the text file is 100,000 lines corresponding to each image in the test split. Each line of integers correspond to the rank-ordered, top 5 predictions for each test image. The integers are 1-indexed corresponding to the line number in the corresponding labels file. See 'imagenet2012\\_labels.txt'.",
"### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example looks like below:",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'label': an 'int' classification label. -1 for 'test' set as the labels are missing.\n\n\nThe labels are indexed based on a sorted list of synset ids such as 'n07565083' which we automatically map to original class names. The original dataset is divided into folders based on these synset ids. To get a mapping from original synset names, use the file LOC\\_synset\\_mapping.txt available on Kaggle challenge page. You can also use 'dataset\\_instance.features[\"labels\"].int2str' function to get the class for a particular label index. Also note that, labels for test set are returned as -1 as they are missing.\n\n\n\n\n Click here to see the full list of ImageNet class labels mapping:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe ImageNet project was inspired by two important needs in computer vision research. The first was the need to establish a clear North Star problem in computer vision. While the field enjoyed an abundance of important tasks to work on, from stereo vision to image retrieval, from 3D reconstruction to image segmentation, object categorization was recognized to be one of the most fundamental capabilities of both human and machine vision. Hence there was a growing demand for a high quality object categorization benchmark with clearly established evaluation metrics. Second, there was a critical need for more data to enable more generalizable machine learning methods. Ever since the birth of the digital era and the availability of web-scale data exchanges, researchers in these fields have been working hard to design more and more sophisticated algorithms to index, retrieve, organize and annotate multimedia data. But good research requires good resources. To tackle this problem at scale (think of your growing personal collection of digital images, or videos, or a commercial web search engine’s database), it was critical to provide researchers with a large-scale image database for both training and testing. The convergence of these two intellectual reasons motivated us to build ImageNet.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nInitial data for ImageNet image classification task consists of photographs collected from Flickr and other search engines, manually labeled with the presence of one of 1000 object categories. Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs 1. The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow.",
"#### Who are the source language producers?\n\n\nWordNet synsets further quality controlled by human annotators. The images are from Flickr.",
"### Annotations",
"#### Annotation process\n\n\nThe annotation process of collecting ImageNet for image classification task is a three step process.\n\n\n1. Defining the 1000 object categories for the image classification task. These categories have evolved over the years.\n2. Collecting the candidate image for these object categories using a search engine.\n3. Quality control on the candidate images by using human annotators on Amazon Mechanical Turk (AMT) to make sure the image has the synset it was collected for.\n\n\nSee the section 3.1 in 1 for more details on data collection procedure and 2 for general information on ImageNet.",
"#### Who are the annotators?\n\n\nImages are automatically fetched from an image search engine based on the synsets and filtered using human annotators on Amazon Mechanical Turk. See 1 for more details.",
"### Personal and Sensitive Information\n\n\nThe 1,000 categories selected for this subset contain only 3 people categories (scuba diver, bridegroom, and baseball player) while the full ImageNet contains 2,832 people categories under the person subtree (accounting for roughly 8.3% of the total images). This subset does contain the images of people without their consent. Though, the study in [[1]](URL on obfuscating faces of the people in the ImageNet 2012 subset shows that blurring people's faces causes a very minor decrease in accuracy (~0.6%) suggesting that privacy-aware models can be trained on ImageNet. On larger ImageNet, there has been an attempt at filtering and balancing the people subtree in the larger ImageNet.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe ImageNet dataset has been very crucial in advancement of deep learning technology as being the standard benchmark for the computer vision models. The dataset aims to probe models on their understanding of the objects and has become the de-facto dataset for this purpose. ImageNet is still one of the major datasets on which models are evaluated for their generalization in computer vision capabilities as the field moves towards self-supervised algorithms. Please see the future section in 1 for a discussion on social impact of the dataset.",
"### Discussion of Biases\n\n\n1. A study of the history of the multiple layers (taxonomy, object classes and labeling) of ImageNet and WordNet in 2019 described how bias is deeply embedded in most classification approaches for of all sorts of images.\n2. A study has also shown that ImageNet trained models are biased towards texture rather than shapes which in contrast with how humans do object classification. Increasing the shape bias improves the accuracy and robustness.\n3. Another study more potential issues and biases with the ImageNet dataset and provides an alternative benchmark for image classification task. The data collected contains humans without their consent.\n4. ImageNet data with face obfuscation is also provided at this link\n5. A study on genealogy of ImageNet is can be found at this link about the \"norms, values, and assumptions\" in ImageNet.\n6. See this study on filtering and balancing the distribution of people subtree in the larger complete ImageNet.",
"### Other Known Limitations\n\n\n1. Since most of the images were collected from internet, keep in mind that some images in ImageNet might be subject to copyrights. See the following papers for more details: [[1]](URL [[2]](URL [[3]](URL\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nAuthors of [[1]](URL and [[2]](URL\n\n\n* Olga Russakovsky\n* Jia Deng\n* Hao Su\n* Jonathan Krause\n* Sanjeev Satheesh\n* Wei Dong\n* Richard Socher\n* Li-Jia Li\n* Kai Li\n* Sean Ma\n* Zhiheng Huang\n* Andrej Karpathy\n* Aditya Khosla\n* Michael Bernstein\n* Alexander C Berg\n* Li Fei-Fei",
"### Licensing Information\n\n\nIn exchange for permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:\n\n\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. The law of the State of New Jersey shall apply to all disputes under this agreement.",
"### Contributions\n\n\nThanks to @apsdehal for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-other #arxiv-1409.0575 #arxiv-1912.07726 #arxiv-1811.12231 #arxiv-2109.13228 #region-us \n",
"### Dataset Summary\n\n\nILSVRC 2012, commonly known as 'ImageNet' is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a \"synonym set\" or \"synset\". There are more than 100,000 synsets in WordNet, majority of them are nouns (80,000+). ImageNet aims to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated.\n\n\nThis dataset provides access to ImageNet (ILSVRC) 2012 which is the most commonly used subset of ImageNet. This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. The version also has the patch which fixes some of the corrupted test set images already applied. For full ImageNet dataset presented in [[2]](URL please check the download section of the main website.",
"### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image into one of 1000 ImageNet classes. The leaderboard is available here.\n\n\nTo evaluate the 'imagenet-classification' accuracy on the test split, one must first create an account at URL. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at URL The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is \"Classification submission (top-5 cls error)\". A sample of an exported text file looks like the following:\n\n\nThe export format is described in full in \"URL\" within the 2013 development kit available here: URL Please see the section entitled \"3.3 CLS-LOC submission format\". Briefly, the format of the text file is 100,000 lines corresponding to each image in the test split. Each line of integers correspond to the rank-ordered, top 5 predictions for each test image. The integers are 1-indexed corresponding to the line number in the corresponding labels file. See 'imagenet2012\\_labels.txt'.",
"### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example looks like below:",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'label': an 'int' classification label. -1 for 'test' set as the labels are missing.\n\n\nThe labels are indexed based on a sorted list of synset ids such as 'n07565083' which we automatically map to original class names. The original dataset is divided into folders based on these synset ids. To get a mapping from original synset names, use the file LOC\\_synset\\_mapping.txt available on Kaggle challenge page. You can also use 'dataset\\_instance.features[\"labels\"].int2str' function to get the class for a particular label index. Also note that, labels for test set are returned as -1 as they are missing.\n\n\n\n\n Click here to see the full list of ImageNet class labels mapping:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe ImageNet project was inspired by two important needs in computer vision research. The first was the need to establish a clear North Star problem in computer vision. While the field enjoyed an abundance of important tasks to work on, from stereo vision to image retrieval, from 3D reconstruction to image segmentation, object categorization was recognized to be one of the most fundamental capabilities of both human and machine vision. Hence there was a growing demand for a high quality object categorization benchmark with clearly established evaluation metrics. Second, there was a critical need for more data to enable more generalizable machine learning methods. Ever since the birth of the digital era and the availability of web-scale data exchanges, researchers in these fields have been working hard to design more and more sophisticated algorithms to index, retrieve, organize and annotate multimedia data. But good research requires good resources. To tackle this problem at scale (think of your growing personal collection of digital images, or videos, or a commercial web search engine’s database), it was critical to provide researchers with a large-scale image database for both training and testing. The convergence of these two intellectual reasons motivated us to build ImageNet.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nInitial data for ImageNet image classification task consists of photographs collected from Flickr and other search engines, manually labeled with the presence of one of 1000 object categories. Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs 1. The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow.",
"#### Who are the source language producers?\n\n\nWordNet synsets further quality controlled by human annotators. The images are from Flickr.",
"### Annotations",
"#### Annotation process\n\n\nThe annotation process of collecting ImageNet for image classification task is a three step process.\n\n\n1. Defining the 1000 object categories for the image classification task. These categories have evolved over the years.\n2. Collecting the candidate image for these object categories using a search engine.\n3. Quality control on the candidate images by using human annotators on Amazon Mechanical Turk (AMT) to make sure the image has the synset it was collected for.\n\n\nSee the section 3.1 in 1 for more details on data collection procedure and 2 for general information on ImageNet.",
"#### Who are the annotators?\n\n\nImages are automatically fetched from an image search engine based on the synsets and filtered using human annotators on Amazon Mechanical Turk. See 1 for more details.",
"### Personal and Sensitive Information\n\n\nThe 1,000 categories selected for this subset contain only 3 people categories (scuba diver, bridegroom, and baseball player) while the full ImageNet contains 2,832 people categories under the person subtree (accounting for roughly 8.3% of the total images). This subset does contain the images of people without their consent. Though, the study in [[1]](URL on obfuscating faces of the people in the ImageNet 2012 subset shows that blurring people's faces causes a very minor decrease in accuracy (~0.6%) suggesting that privacy-aware models can be trained on ImageNet. On larger ImageNet, there has been an attempt at filtering and balancing the people subtree in the larger ImageNet.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe ImageNet dataset has been very crucial in advancement of deep learning technology as being the standard benchmark for the computer vision models. The dataset aims to probe models on their understanding of the objects and has become the de-facto dataset for this purpose. ImageNet is still one of the major datasets on which models are evaluated for their generalization in computer vision capabilities as the field moves towards self-supervised algorithms. Please see the future section in 1 for a discussion on social impact of the dataset.",
"### Discussion of Biases\n\n\n1. A study of the history of the multiple layers (taxonomy, object classes and labeling) of ImageNet and WordNet in 2019 described how bias is deeply embedded in most classification approaches for of all sorts of images.\n2. A study has also shown that ImageNet trained models are biased towards texture rather than shapes which in contrast with how humans do object classification. Increasing the shape bias improves the accuracy and robustness.\n3. Another study more potential issues and biases with the ImageNet dataset and provides an alternative benchmark for image classification task. The data collected contains humans without their consent.\n4. ImageNet data with face obfuscation is also provided at this link\n5. A study on genealogy of ImageNet is can be found at this link about the \"norms, values, and assumptions\" in ImageNet.\n6. See this study on filtering and balancing the distribution of people subtree in the larger complete ImageNet.",
"### Other Known Limitations\n\n\n1. Since most of the images were collected from internet, keep in mind that some images in ImageNet might be subject to copyrights. See the following papers for more details: [[1]](URL [[2]](URL [[3]](URL\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nAuthors of [[1]](URL and [[2]](URL\n\n\n* Olga Russakovsky\n* Jia Deng\n* Hao Su\n* Jonathan Krause\n* Sanjeev Satheesh\n* Wei Dong\n* Richard Socher\n* Li-Jia Li\n* Kai Li\n* Sean Ma\n* Zhiheng Huang\n* Andrej Karpathy\n* Aditya Khosla\n* Michael Bernstein\n* Alexander C Berg\n* Li Fei-Fei",
"### Licensing Information\n\n\nIn exchange for permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:\n\n\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. The law of the State of New Jersey shall apply to all disputes under this agreement.",
"### Contributions\n\n\nThanks to @apsdehal for adding this dataset."
] |
e387ddf167f3eba99936cff89909ed6264f17e1f |
# Dataset Card for Czech Subjectivity Dataset
### Dataset Summary
Czech subjectivity dataset (Subj-CS) of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper for a description: https://arxiv.org/abs/2204.13915
### Github
https://github.com/pauli31/czech-subjectivity-dataset
### Supported Tasks and Leaderboards
Subjectivity Analysis
### Languages
Czech
### Data Instances
train/dev/test
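A minimal sketch of loading the data with the Hugging Face `datasets` library (the split and column names are assumptions; check the GitHub repository for the exact schema):

```python
from datasets import load_dataset

# Assumed repository id; split names may differ (e.g. "validation" instead of "dev").
ds = load_dataset("pauli31/czech-subjectivity-dataset")
print(ds)               # inspect the available splits and columns
print(ds["train"][0])   # first training example
```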
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
If you use our dataset or software for academic research, please cite our [paper](https://arxiv.org/abs/2204.13915)
```
@article{pib2022czech,
title={Czech Dataset for Cross-lingual Subjectivity Classification},
author={Pavel Přibáň and Josef Steinberger},
year={2022},
eprint={2204.13915},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contact
[email protected]
### Contributions
Thanks to [@pauli31](https://github.com/pauli31) for adding this dataset. | pauli31/czech-subjectivity-dataset | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-4.0",
"arxiv:2204.13915",
"region:us"
] | 2022-05-02T17:27:17+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["cs"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Czech Subjectivity Dataset"} | 2024-01-05T20:05:40+00:00 | [
"2204.13915"
] | [
"cs"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Czech #license-cc-by-nc-sa-4.0 #arxiv-2204.13915 #region-us
|
# Dataset Card for Czech Subjectivity Dataset
### Dataset Summary
Czech subjectivity dataset (Subj-CS) of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper for a description: URL
### Github
URL
### Supported Tasks and Leaderboards
Subjectivity Analysis
### Languages
Czech
### Data Instances
train/dev/test
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
If you use our dataset or software for academic research, please cite our paper
### Contact
pribanp@URL
### Contributions
Thanks to @pauli31 for adding this dataset. | [
"# Dataset Card for Czech Subjectivity Dataset",
"### Dataset Summary\n\nCzech subjectivity dataset (Subj-CS) of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper description URL",
"### Github\nURL",
"### Supported Tasks and Leaderboards\n\nSubjectivity Analysis",
"### Languages\n\nCzech",
"### Data Instances\n\ntrain/dev/test",
"### Licensing Information\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\n\n\nIf you use our dataset or software for academic research, please cite the our paper",
"### Contact\npribanp@URL",
"### Contributions\n\nThanks to @pauli31 for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Czech #license-cc-by-nc-sa-4.0 #arxiv-2204.13915 #region-us \n",
"# Dataset Card for Czech Subjectivity Dataset",
"### Dataset Summary\n\nCzech subjectivity dataset (Subj-CS) of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper description URL",
"### Github\nURL",
"### Supported Tasks and Leaderboards\n\nSubjectivity Analysis",
"### Languages\n\nCzech",
"### Data Instances\n\ntrain/dev/test",
"### Licensing Information\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\n\n\nIf you use our dataset or software for academic research, please cite the our paper",
"### Contact\npribanp@URL",
"### Contributions\n\nThanks to @pauli31 for adding this dataset."
] |
ffd50dace84cd36afc994d9193d988b30c200a5b |
# Dataset Card for Text-based NP Enrichment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://yanaiela.github.io/TNE/
- **Repository:** https://github.com/yanaiela/TNE
- **Paper:** https://arxiv.org/abs/2109.12085
- **Leaderboard:** [TNE OOD](https://leaderboard.allenai.org/tne-ood/submissions/public)
[TNE](https://leaderboard.allenai.org/tne/submissions/public)
- **Point of Contact:** [Yanai Elazar](mailto:[email protected])
### Dataset Summary
Text-based NP Enrichment (TNE) is a natural language understanding (NLU) task, which focuses on relations between noun phrases (NPs) that can be mediated via prepositions. The dataset contains 5,497 documents, annotated exhaustively with all possible links between the NPs in each document.
The main data comes from WikiNews, which is used for train/dev/test. We also collected an additional set of 509 documents to serve as out of distribution (OOD) data points, from the Book Corpus, IMDB reviews and Reddit.
### Supported Tasks and Leaderboards
The data contain both the main data for the TNE task and coreference resolution data.
There are two leaderboards for the TNE data, one for the standard test set, and another one for the OOD test set:
- [TNE Leaderboard](https://leaderboard.allenai.org/tne/submissions/public)
- [TNE OOD Leaderboard](https://leaderboard.allenai.org/tne-ood/submissions/public)
### Languages
The text in the dataset is in English, as spoken in the different domains we include. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
The original files are in a jsonl format, containing a dictionary for a single document on each line.
Each document contains a different number of labels, due to the different number of NPs.
The test and ood splits come without the annotated labels.
### Data Fields
A document consists of:
* `id`: a unique identifier of a document, beginning with `r` and followed by a number
* `text`: the text of the document. The title and subtitles (if they exist) are separated with two new lines. The paragraphs
are separated by a single new line.
* `tokens`: a list of strings, containing the tokenized tokens
* `nps`: a list of dictionaries, containing the following entries:
* `text`: the text of the np
* `start_index`: an integer indicating the starting index in the text
* `end_index`: an integer indicating the ending index in the text
* `start_token`: an integer indicating the first token of the np out of the tokenized tokens
* `end_token`: an integer indicating the last token of the np out of the tokenized tokens
* `id`: the id of the np
* `np_relations`: these are the relation labels of the document. It is a list of dictionaries, where each
dictionary contains:
* `anchor`: the id of the anchor np
* `complement`: the id of the complement np
  * `preposition`: the preposition that links between the anchor and the complement. This can take one out of 24 pre-defined prepositions (23 + member(s)-of)
* `complement_coref_cluster_id`: the coreference id, which the complement is part of.
* `coref`: the coreference labels. It contains a list of dictionaries, where each dictionary contains:
* `id`: the id of the coreference cluster
* `members`: the ids of the nps members of such cluster
  * `np_type`: the type of cluster. It can be one of
* `standard`: regular coreference cluster
* `time/date/measurement`: a time / date / measurement np. These will be singletons.
* `idiomatic`: an idiomatic expression
* `metadata`: metadata of the document. It contains the following:
* `annotators`: a dictionary with anonymized annotators id
* `coref_worker`: the coreference worker id
* `consolidator_worker`: the consolidator worker id
* `np-relations_worker`: the np relations worker id
* `url`: the url where the document was taken from (not always existing)
* `source`: the original file name where the document was taken from
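For illustration, a minimal sketch of reading one document and printing its NP relations (the field names follow the description above; the file name is an assumption, and the released files should be treated as authoritative for the exact schema):

```python
import json

# Assumed file name; the data ships as jsonl with one document per line.
with open("train.jsonl", encoding="utf-8") as f:
    doc = json.loads(f.readline())

# Index the noun phrases by id, then walk the relation labels.
nps = {np["id"]: np for np in doc["nps"]}
for rel in doc["np_relations"]:
    anchor = nps[rel["anchor"]]["text"]
    complement = nps[rel["complement"]]["text"]
    print(f"{anchor} --[{rel['preposition']}]--> {complement}")
```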
### Data Splits
The dataset is spread across four files, for the four different splits: train, dev, test and test_ood.
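| Split | Documents |
|-----------|-----------|
| train | 3,988 |
| dev | 500 |
| test | 500 |
| test_ood | 509 |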
Additional details on the data statistics can be found in the [paper](https://arxiv.org/abs/2109.12085)
## Dataset Creation
### Curation Rationale
TNE was built as a new task for language understanding, focusing on extracting relations between nouns, moderated by prepositions.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was created by Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty, during work done at Bar-Ilan University, and AI2.
### Licensing Information
The data is released under the MIT license.
### Citation Information
```bibtex
@article{tne,
author = {Elazar, Yanai and Basmov, Victoria and Goldberg, Yoav and Tsarfaty, Reut},
title = "{Text-based NP Enrichment}",
journal = {Transactions of the Association for Computational Linguistics},
year = {2022},
}
```
### Contributions
Thanks to [@yanaiela](https://github.com/yanaiela), who is also the first author of the paper, for adding this dataset. | tne | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2109.12085",
"region:us"
] | 2022-05-03T13:06:32+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Text-based NP Enrichment", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "nps", "list": [{"name": "text", "dtype": "string"}, {"name": "first_char", "dtype": "int32"}, {"name": "last_char", "dtype": "int32"}, {"name": "first_token", "dtype": "int32"}, {"name": "last_token", "dtype": "int32"}, {"name": "id", "dtype": "string"}]}, {"name": "np_relations", "list": [{"name": "anchor", "dtype": "string"}, {"name": "complement", "dtype": "string"}, {"name": "preposition", "dtype": {"class_label": {"names": {"0": "about", "1": "for", "2": "with", "3": "from", "4": "among", "5": "by", "6": "on", "7": "at", "8": "during", "9": "of", "10": "member(s) of", "11": "in", "12": "after", "13": "under", "14": "to", "15": "into", "16": "before", "17": "near", "18": "outside", "19": "around", "20": "between", "21": "against", "22": "over", "23": "inside"}}}}, {"name": "complement_coref_cluster_id", "dtype": "string"}]}, {"name": "coref", "list": [{"name": "id", "dtype": "string"}, {"name": "members", "sequence": "string"}, {"name": "np_type", "dtype": {"class_label": {"names": {"0": "standard", "1": "time/date/measurement", "2": "idiomatic"}}}}]}, {"name": "metadata", "struct": [{"name": "annotators", "struct": [{"name": "coref_worker", "dtype": "int32"}, {"name": "consolidator_worker", "dtype": "int32"}, {"name": "np-relations_worker", "sequence": "int32"}]}, {"name": "url", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 41308170, "num_examples": 3988}, {"name": "validation", "num_bytes": 5495419, "num_examples": 500}, {"name": "test", "num_bytes": 2203716, "num_examples": 500}, {"name": "test_ood", "num_bytes": 2249352, "num_examples": 509}], "download_size": 14194578, "dataset_size": 51256657}} | 2024-01-18T11:19:07+00:00 | [
"2109.12085"
] | [
"en"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-2109.12085 #region-us
|
# Dataset Card for Text-based NP Enrichment
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: TNE OOD
TNE
- Point of Contact: Yanai Elazar
### Dataset Summary
Text-based NP Enrichment (TNE) is a natural language understanding (NLU) task, which focuses on relations between noun phrases (NPs) that can be mediated via prepositions. The dataset contains 5,497 documents, annotated exhaustively with all possible links between the NPs in each document.
The main data comes from WikiNews, which is used for train/dev/test. We also collected an additional set of 509 documents to serve as out of distribution (OOD) data points, from the Book Corpus, IMDB reviews and Reddit.
### Supported Tasks and Leaderboards
The data contain both the main data for the TNE task and coreference resolution data.
There are two leaderboards for the TNE data, one for the standard test set, and another one for the OOD test set:
- TNE Leaderboard
- TNE OOD Leaderboard
### Languages
The text in the dataset is in English, as spoken in the different domains we include. The associated BCP-47 code is 'en'.
## Dataset Structure
### Data Instances
The original files are in a jsonl format, containing a dictionary for a single document on each line.
Each document contains a different number of labels, due to the different number of NPs.
The test and ood splits come without the annotated labels.
### Data Fields
A document consists of:
* 'id': a unique identifier of a document, beginning with 'r' and followed by a number
* 'text': the text of the document. The title and subtitles (if they exist) are separated with two new lines. The paragraphs
are separated by a single new line.
* 'tokens': a list of strings, containing the tokenized tokens
* 'nps': a list of dictionaries, containing the following entries:
* 'text': the text of the np
* 'start_index': an integer indicating the starting index in the text
* 'end_index': an integer indicating the ending index in the text
* 'start_token': an integer indicating the first token of the np out of the tokenized tokens
* 'end_token': an integer indicating the last token of the np out of the tokenized tokens
* 'id': the id of the np
* 'np_relations': these are the relation labels of the document. It is a list of dictionaries, where each
dictionary contains:
* 'anchor': the id of the anchor np
* 'complement': the id of the complement np
 * 'preposition': the preposition that links between the anchor and the complement. This can take one out of 24 pre-defined prepositions (23 + member(s)-of)
* 'complement_coref_cluster_id': the coreference id, which the complement is part of.
* 'coref': the coreference labels. It contains a list of dictionaries, where each dictionary contains:
* 'id': the id of the coreference cluster
* 'members': the ids of the nps members of such cluster
 * 'np_type': the type of cluster. It can be one of
* 'standard': regular coreference cluster
* 'time/date/measurement': a time / date / measurement np. These will be singletons.
* 'idiomatic': an idiomatic expression
* 'metadata': metadata of the document. It contains the following:
* 'annotators': a dictionary with anonymized annotators id
* 'coref_worker': the coreference worker id
* 'consolidator_worker': the consolidator worker id
* 'np-relations_worker': the np relations worker id
* 'url': the url where the document was taken from (not always existing)
* 'source': the original file name where the document was taken from
### Data Splits
The dataset is spread across four files, for the four different splits: train, dev, test and test_ood.
Additional details on the data statistics can be found in the paper
## Dataset Creation
### Curation Rationale
TNE was built as a new task for language understanding, focusing on extracting relations between nouns, moderated by prepositions.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset was created by Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty, during work done at Bar-Ilan University, and AI2.
### Licensing Information
The data is released under the MIT license.
### Contributions
Thanks to @yanaiela, who is also the first author of the paper, for adding this dataset. | [
"# Dataset Card for Text-based NP Enrichment",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: TNE OOD\nTNE\n- Point of Contact: Yanai Elazar",
"### Dataset Summary\n\nText-based NP Enrichment (TNE) is a natural language understanding (NLU) task, which focus on relations between noun phrases (NPs) that can be mediated via prepositions. The dataset contains 5,497 documents, annotated exhaustively with all possible links between the NPs in each document.\n\nThe main data comes from WikiNews, which is used for train/dev/test. We also collected an additional set of 509 documents to serve as out of distribution (OOD) data points, from the Book Corpus, IMDB reviews and Reddit.",
"### Supported Tasks and Leaderboards\n\nThe data contain both the main data for the TNE task, as well as coreference resolution data.\nThere are two leaderboards for the TNE data, one for the standard test set, and another one for the OOD test set:\n- TNE Leaderboard\n- TNE OOD Leaderboard",
"### Languages\n\nThe text in the dataset is in English, as spoken in the different domains we include. The associated BCP-47 code is 'en'.",
"## Dataset Structure",
"### Data Instances\n\nThe original files are in a jsonl format, containing a dictionary of a single document, in each line.\nEach document contain a different amount of labels, due to the different amount of NPs.\nThe test and ood splits come without the annotated labels.",
"### Data Fields\n\nA document consists of:\n\n* 'id': a unique identifier of a document, beginning with 'r' and followed by a number\n* 'text': the text of the document. The title and subtitles (if exists) are separated with two new lines. The paragraphs\nare separated by a single new line.\n* 'tokens': a list of string, containing the tokenized tokens\n* 'nps': a list of dictionaries, containing the following entries:\n * 'text': the text of the np\n * 'start_index': an integer indicating the starting index in the text\n * 'end_index': an integer indicating the ending index in the text\n * 'start_token': an integer indicating the first token of the np out of the tokenized tokens\n * 'end_token': an integer indicating the last token of the np out of the tokenized tokens\n * 'id': the id of the np\n* 'np_relations': these are the relation labels of the document. It is a list of dictionaries, where each\ndictionary contains:\n * 'anchor': the id of the anchor np\n * 'complement': the id of the complement np\n * 'preposition': the preposition that links between the anchor and the complement. This can take one out of 24 pre-defined preposition (23 + member(s)-of)\n * 'complement_coref_cluster_id': the coreference id, which the complement is part of.\n* 'coref': the coreference labels. It contains a list of dictionaries, where each dictionary contains:\n * 'id': the id of the coreference cluster\n * 'members': the ids of the nps members of such cluster\n * 'np_type': the type of cluster. It can be either \n * 'standard': regular coreference cluster\n * 'time/date/measurement': a time / date / measurement np. These will be singletons.\n * 'idiomatic': an idiomatic expression\n* 'metadata': metadata of the document. It contains the following:\n * 'annotators': a dictionary with anonymized annotators id\n * 'coref_worker': the coreference worker id\n * 'consolidator_worker': the consolidator worker id\n * 'np-relations_worker': the np relations worker id\n * 'url': the url where the document was taken from (not always existing)\n * 'source': the original file name where the document was taken from",
"### Data Splits\n\nThe dataset is spread across four files, for the four different splits: train, dev, test and test_ood.\nAdditional details on the data statistics can be found in the paper",
"## Dataset Creation",
"### Curation Rationale\n\nTNE was build as a new task for language understanding, focusing on extracting relations between nouns, moderated by prepositions.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty, during work done at Bar-Ilan University, and AI2.",
"### Licensing Information\n\nThe data is released under the MIT license.",
"### Contributions\n\nThanks to @yanaiela, who is also the first author of the paper, for adding this dataset."
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-2109.12085 #region-us \n",
"# Dataset Card for Text-based NP Enrichment",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: TNE OOD\nTNE\n- Point of Contact: Yanai Elazar",
"### Dataset Summary\n\nText-based NP Enrichment (TNE) is a natural language understanding (NLU) task, which focus on relations between noun phrases (NPs) that can be mediated via prepositions. The dataset contains 5,497 documents, annotated exhaustively with all possible links between the NPs in each document.\n\nThe main data comes from WikiNews, which is used for train/dev/test. We also collected an additional set of 509 documents to serve as out of distribution (OOD) data points, from the Book Corpus, IMDB reviews and Reddit.",
"### Supported Tasks and Leaderboards\n\nThe data contain both the main data for the TNE task, as well as coreference resolution data.\nThere are two leaderboards for the TNE data, one for the standard test set, and another one for the OOD test set:\n- TNE Leaderboard\n- TNE OOD Leaderboard",
"### Languages\n\nThe text in the dataset is in English, as spoken in the different domains we include. The associated BCP-47 code is 'en'.",
"## Dataset Structure",
"### Data Instances\n\nThe original files are in a jsonl format, containing a dictionary of a single document, in each line.\nEach document contain a different amount of labels, due to the different amount of NPs.\nThe test and ood splits come without the annotated labels.",
"### Data Fields\n\nA document consists of:\n\n* 'id': a unique identifier of a document, beginning with 'r' and followed by a number\n* 'text': the text of the document. The title and subtitles (if exists) are separated with two new lines. The paragraphs\nare separated by a single new line.\n* 'tokens': a list of string, containing the tokenized tokens\n* 'nps': a list of dictionaries, containing the following entries:\n * 'text': the text of the np\n * 'start_index': an integer indicating the starting index in the text\n * 'end_index': an integer indicating the ending index in the text\n * 'start_token': an integer indicating the first token of the np out of the tokenized tokens\n * 'end_token': an integer indicating the last token of the np out of the tokenized tokens\n * 'id': the id of the np\n* 'np_relations': these are the relation labels of the document. It is a list of dictionaries, where each\ndictionary contains:\n * 'anchor': the id of the anchor np\n * 'complement': the id of the complement np\n * 'preposition': the preposition that links between the anchor and the complement. This can take one out of 24 pre-defined preposition (23 + member(s)-of)\n * 'complement_coref_cluster_id': the coreference id, which the complement is part of.\n* 'coref': the coreference labels. It contains a list of dictionaries, where each dictionary contains:\n * 'id': the id of the coreference cluster\n * 'members': the ids of the nps members of such cluster\n * 'np_type': the type of cluster. It can be either \n * 'standard': regular coreference cluster\n * 'time/date/measurement': a time / date / measurement np. These will be singletons.\n * 'idiomatic': an idiomatic expression\n* 'metadata': metadata of the document. It contains the following:\n * 'annotators': a dictionary with anonymized annotators id\n * 'coref_worker': the coreference worker id\n * 'consolidator_worker': the consolidator worker id\n * 'np-relations_worker': the np relations worker id\n * 'url': the url where the document was taken from (not always existing)\n * 'source': the original file name where the document was taken from",
"### Data Splits\n\nThe dataset is spread across four files, for the four different splits: train, dev, test and test_ood.\nAdditional details on the data statistics can be found in the paper",
"## Dataset Creation",
"### Curation Rationale\n\nTNE was build as a new task for language understanding, focusing on extracting relations between nouns, moderated by prepositions.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty, during work done at Bar-Ilan University, and AI2.",
"### Licensing Information\n\nThe data is released under the MIT license.",
"### Contributions\n\nThanks to @yanaiela, who is also the first author of the paper, for adding this dataset."
] |
8b6e3f5b1dbd7244ae8251f5b1dc02da9146a04b |
# esCorpius Multilingual Raw
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages other than English, as they are either too small or of low quality as a result of sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 Pb of Common Crawl data. For several of the languages covered, it is the most extensive corpus with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under the CC BY-NC-ND 4.0 license.
# Usage
```
from datasets import load_dataset

dataset = load_dataset('LHF/escorpius-m', split='train', streaming=True)
```
# Intended use
This corpus is the *raw version* of the esCorpius-m corpus. It can be used for benchmarking deduplication tools; a minimal exact-duplicate baseline is sketched below.
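Since the corpus is released raw (without deduplication), a quick exact-duplicate baseline can be computed directly from the stream. The sketch below is only illustrative: it reuses the load call from the Usage section above and assumes the document content is exposed under a `text` field, which may differ in the hosted schema.
```python
import hashlib

from datasets import load_dataset

dataset = load_dataset('LHF/escorpius-m', split='train', streaming=True)

seen, duplicates, total = set(), 0, 0
for example in dataset:
    # 'text' is an assumed field name for the raw document content.
    digest = hashlib.sha1(example['text'].strip().lower().encode('utf-8')).hexdigest()
    if digest in seen:
        duplicates += 1
    else:
        seen.add(digest)
    total += 1
    if total >= 100_000:  # only sample a prefix of the stream
        break

print(f'Exact-duplicate rate over the sampled prefix: {duplicates / total:.4%}')
```
More realistic benchmarks (e.g. MinHash/LSH over shingled paragraphs) can follow the same streaming pattern; exact hashing only gives a lower bound on the duplication rate.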
## Other corpora
- esCorpius multilingual corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius-m
- esCorpius original *Spanish-only* corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius
## Citation
Link to paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147
Cite this work:
```
@inproceedings{gutierrezfandino22_iberspeech,
author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas},
title={{esCorpius: A Massive Spanish Crawling Corpus}},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
year=2022,
booktitle={Proc. IberSPEECH 2022},
pages={126--130},
doi={10.21437/IberSPEECH.2022-26}
}
```
## Disclaimer
We did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus.
| LHF/escorpius-mr | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:multilingual",
"size_categories:100B<n<1T",
"source_datasets:original",
"language:af",
"language:ar",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:hi",
"language:hr",
"language:it",
"language:ja",
"language:ko",
"language:mt",
"language:nl",
"language:no",
"language:oc",
"language:pa",
"language:pl",
"language:pt",
"language:ro",
"language:sl",
"language:sr",
"language:sv",
"language:tr",
"language:uk",
"language:ur",
"license:cc-by-nc-nd-4.0",
"arxiv:2206.15147",
"region:us"
] | 2022-05-03T17:49:47+00:00 | {"language": ["af", "ar", "bn", "ca", "cs", "da", "de", "el", "eu", "fa", "fi", "fr", "gl", "hi", "hr", "it", "ja", "ko", "mt", "nl", false, "oc", "pa", "pl", "pt", "ro", "sl", "sr", "sv", "tr", "uk", "ur"], "license": "cc-by-nc-nd-4.0", "multilinguality": ["multilingual"], "size_categories": ["100B<n<1T"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"]} | 2023-05-11T21:29:21+00:00 | [
"2206.15147"
] | [
"af",
"ar",
"bn",
"ca",
"cs",
"da",
"de",
"el",
"eu",
"fa",
"fi",
"fr",
"gl",
"hi",
"hr",
"it",
"ja",
"ko",
"mt",
"nl",
"no",
"oc",
"pa",
"pl",
"pt",
"ro",
"sl",
"sr",
"sv",
"tr",
"uk",
"ur"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #multilinguality-multilingual #size_categories-100B<n<1T #source_datasets-original #language-Afrikaans #language-Arabic #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-Basque #language-Persian #language-Finnish #language-French #language-Galician #language-Hindi #language-Croatian #language-Italian #language-Japanese #language-Korean #language-Maltese #language-Dutch #language-Norwegian #language-Occitan (post 1500) #language-Panjabi #language-Polish #language-Portuguese #language-Romanian #language-Slovenian #language-Serbian #language-Swedish #language-Turkish #language-Ukrainian #language-Urdu #license-cc-by-nc-nd-4.0 #arxiv-2206.15147 #region-us
|
# esCorpius Multilingual Raw
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages other than English, as they are either too small or of low quality as a result of sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 Pb of Common Crawl data. For several of the languages covered, it is the most extensive corpus with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under the CC BY-NC-ND 4.0 license.
# Usage
# Intended use
This corpus is the *raw version* of the esCorpius-m corpus. This corpus can be used for benchmarking deduplication tools.
## Other corpora
- esCorpius multilingual corpus (deduplicated): URL
- esCorpius original *Spanish-only* corpus (deduplicated): URL
Link to paper: URL / URL
Cite this work:
## Disclaimer
We did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus.
| [
"# esCorpius Multilingual Raw\nIn the recent years, Transformer-based models have lead to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages different from English, as they are either too small, or present a low quality derived from sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from near 1 Pb of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to complain with EU regulations. esCorpius-m has been released under CC BY-NC-ND 4.0 license.",
"# Usage",
"# Intended use\nThis corpus is the *raw version* of the esCorpius-m corpus. This corpus can be used for benchmarking deduplication tools.",
"## Other corpora\n- esCorpius multilingual corpus (deduplicated): URL\n- esCorpius original *Spanish-only* corpus (deduplicated): URL\n\nLink to paper: URL / URL\n\nCite this work:",
"## Disclaimer\nWe did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #multilinguality-multilingual #size_categories-100B<n<1T #source_datasets-original #language-Afrikaans #language-Arabic #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-Basque #language-Persian #language-Finnish #language-French #language-Galician #language-Hindi #language-Croatian #language-Italian #language-Japanese #language-Korean #language-Maltese #language-Dutch #language-Norwegian #language-Occitan (post 1500) #language-Panjabi #language-Polish #language-Portuguese #language-Romanian #language-Slovenian #language-Serbian #language-Swedish #language-Turkish #language-Ukrainian #language-Urdu #license-cc-by-nc-nd-4.0 #arxiv-2206.15147 #region-us \n",
"# esCorpius Multilingual Raw\nIn the recent years, Transformer-based models have lead to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages different from English, as they are either too small, or present a low quality derived from sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from near 1 Pb of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to complain with EU regulations. esCorpius-m has been released under CC BY-NC-ND 4.0 license.",
"# Usage",
"# Intended use\nThis corpus is the *raw version* of the esCorpius-m corpus. This corpus can be used for benchmarking deduplication tools.",
"## Other corpora\n- esCorpius multilingual corpus (deduplicated): URL\n- esCorpius original *Spanish-only* corpus (deduplicated): URL\n\nLink to paper: URL / URL\n\nCite this work:",
"## Disclaimer\nWe did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus."
] |
77840f2f84038fdf4b608fff764b21b7ef18eb34 |
# Dataset Card for Elsevier OA CC-By
## Table of Contents
- [Dataset Card for Elsevier OA CC-By](#dataset-card-for-elsevier-oa-cc-by)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs
- **Repository:** https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs
- **Paper:** https://arxiv.org/abs/2008.00774
- **Leaderboard:**
- **Point of Contact:** [@orieg](https://huggingface.co/orieg)
### Dataset Summary
Elsevier OA CC-By: This is a corpus of 40k (40,091) open access (OA) CC-BY articles from across Elsevier’s journals,
representing a large-scale, cross-discipline set of research data to support NLP and ML research. The corpus includes full-text
articles published from 2014 to 2020, categorized into 27 Mid Level ASJC Codes (subject classification).
***Distribution of Publication Years***
| Publication Year | Number of Articles |
| :---: | :---: |
| 2014 | 3018 |
| 2015 | 4438 |
| 2016 | 5913 |
| 2017 | 6419 |
| 2018 | 8016 |
| 2019 | 10135 |
| 2020 | 2159 |
***Distribution of Articles Per Mid Level ASJC Code. Each article can belong to multiple ASJC codes.***
| Discipline | Count |
| --- | ---: |
| General | 3847 |
| Agricultural and Biological Sciences | 4840 |
| Arts and Humanities | 982 |
| Biochemistry, Genetics and Molecular Biology | 8356 |
| Business, Management and Accounting | 937 |
| Chemical Engineering | 1878 |
| Chemistry | 2490 |
| Computer Science | 2039 |
| Decision Sciences | 406 |
| Earth and Planetary Sciences | 2393 |
| Economics, Econometrics and Finance | 976 |
| Energy | 2730 |
| Engineering | 4778 |
| Environmental Science | 6049 |
| Immunology and Microbiology | 3211 |
| Materials Science | 3477 |
| Mathematics | 538 |
| Medicine | 7273 |
| Neuroscience | 3669 |
| Nursing | 308 |
| Pharmacology, Toxicology and Pharmaceutics | 2405 |
| Physics and Astronomy | 2404 |
| Psychology | 1760 |
| Social Sciences | 3540 |
| Veterinary | 991 |
| Dentistry | 40 |
| Health Professions | 821 |
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`).
## Dataset Structure
### Data Instances
The original dataset was published with the following json structure:
```
{
"docId": <str>,
"metadata":{
"title": <str>,
"authors": [
{
"first": <str>,
"initial": <str>,
"last": <str>,
"email": <str>
},
...
],
"issn": <str>,
"volume": <str>,
"firstpage": <str>,
"lastpage": <str>,
"pub_year": <int>,
"doi": <str>,
"pmid": <str>,
"openaccess": "Full",
"subjareas": [<str>],
"keywords": [<str>],
"asjc": [<int>],
},
"abstract":[
{
"sentence": <str>,
"startOffset": <int>,
"endOffset": <int>
},
...
],
"bib_entries":{
"BIBREF0":{
"title":<str>,
"authors":[
{
"last":<str>,
"initial":<str>,
"first":<str>
},
...
],
"issn": <str>,
"volume": <str>,
"firstpage": <str>,
"lastpage": <str>,
"pub_year": <int>,
"doi": <str>,
"pmid": <str>
},
...
},
"body_text":[
{
"sentence": <str>,
"secId": <str>,
"startOffset": <int>,
"endOffset": <int>,
"title": <str>,
"refoffsets": {
<str>:{
"endOffset":<int>,
"startOffset":<int>
}
},
"parents": [
{
"id": <str>,
"title": <str>
},
...
]
},
...
]
}
```
***docId*** The docId is the identifier of the document. This is unique to the document, and can be resolved into a URL
for the document through the addition of `https://www.sciencedirect.com/science/pii/<docId>`
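For example, with a placeholder identifier (not a real article id), the resolution is a simple string concatenation:
```python
doc_id = "S0000000000000000"  # placeholder docId for illustration only
url = f"https://www.sciencedirect.com/science/pii/{doc_id}"
print(url)  # -> https://www.sciencedirect.com/science/pii/S0000000000000000
```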
***abstract*** This is the author provided abstract for the document
***body_text*** The full text for the document. The text has been split on sentence boundaries, thus making it easier to
use across research projects. Each sentence has the title (and ID) of the section which it is from, along with titles (and
IDs) of the parent section. The highest-level section takes index 0 in the parents array. If the array is empty then the
title of the section for the sentence is the highest level section title. This will allow for the reconstruction of the article
structure. References have been extracted from the sentences. The IDs of the extracted reference and their respective
offset within the sentence can be found in the “refoffsets” field. The complete list of references can be found in
the “bib_entry” field along with the references’ respective metadata. Some will be missing, as we only keep ‘clean’
sentences.
***bib_entities*** All the references from within the document can be found in this section. If the metadata for the
reference is available, it has been added against the key for the reference. Where possible, information such as the
document titles, authors, and relevant identifiers (DOI and PMID) is included. The keys for each reference can be
found in the sentence where the reference is used with the start and end offset of where in the sentence that reference
was used.
***metadata*** Metadata includes additional information about the article, such as the list of authors and relevant IDs (DOI and
PMID), along with a number of classification schemes such as ASJC and Subject Classification.
***author_highlights*** Author highlights were included in the corpus where the author(s) have provided them. The
coverage is 61% of all articles. The author highlights, consisting of 4 to 6 sentences, are provided by the author with
the aim of summarising the core findings and results in the article.
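As a rough illustration of how these pieces fit together, the sketch below rebuilds a section-by-section view of one article from its JSON record. The file path is purely illustrative, and the snippet assumes each article is stored as a single JSON document with the structure shown earlier.
```python
import json

# Illustrative path; each record follows the JSON structure shown above.
with open("corpus/S0000000000000000.json") as f:
    doc = json.load(f)

print(doc["metadata"]["title"])

# Group body sentences by their section title, preserving document order.
sections = {}
for sent in sorted(doc["body_text"], key=lambda s: s["startOffset"]):
    sections.setdefault(sent["title"], []).append(sent["sentence"])

for title, sentences in sections.items():
    print(f"\n## {title}")
    print(" ".join(sentences))
```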
### Data Fields
* ***title***: This is the author provided title for the document. 100% coverage.
* ***abstract***: This is the author provided abstract for the document. 99.25% coverage.
* ***keywords***: This is the author and publisher provided keywords for the document. 100% coverage.
* ***asjc***: This is the disciplines for the document as represented by 334 ASJC (All Science Journal Classification) codes. 100% coverage.
* ***subjareas***: This is the Subject Classification for the document as represented by 27 ASJC top-level subject classifications. 100% coverage.
* ***body_text***: The full text for the document. 100% coverage.
* ***author_highlights***: This is the author provided highlights for the document. 61.31% coverage.
### Data Splits
***Article Counts per Split***
| | Train | Test | Validation |
| --- | :---: | :---: | :---: |
| All Articles | 32072 | 4009 | 4008 |
| With Author Highlights | 19644 | 2420 | 2514 |
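A hedged sketch of selecting the author-highlights subset with the `datasets` library is shown below; it assumes the corpus is loadable from the Hub under this card's repository ID and that `author_highlights` is exposed as a (possibly empty) sequence, which may differ from the hosted schema.
```python
from datasets import load_dataset

ds = load_dataset("orieg/elsevier-oa-cc-by", split="train")

# Keep only articles that ship with author highlights (~61% of the corpus).
with_highlights = ds.filter(lambda ex: len(ex["author_highlights"]) > 0)
print(len(ds), len(with_highlights))
```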
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Date the data was collected: 2020-06-25T11:00:00.000Z
See the [original paper](https://doi.org/10.48550/arXiv.2008.00774) for more detail on the data collection process.
#### Who are the source language producers?
See `3.1 Data Sampling` in the [original paper](https://doi.org/10.48550/arXiv.2008.00774).
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{Kershaw2020ElsevierOC,
title = {Elsevier OA CC-By Corpus},
author = {Daniel James Kershaw and R. Koeling},
journal = {ArXiv},
year = {2020},
volume = {abs/2008.00774},
doi = {https://doi.org/10.48550/arXiv.2008.00774},
url = {https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs},
keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
abstract = {We introduce the Elsevier OA CC-BY corpus. This is the first open
corpus of Scientific Research papers which has a representative sample
from across scientific disciplines. This corpus not only includes the
full text of the article, but also the metadata of the documents,
along with the bibliographic information for each reference.}
}
```
```
@dataset{https://10.17632/zm33cdndxs.3,
doi = {10.17632/zm33cdndxs.2},
url = {https://data.mendeley.com/datasets/zm33cdndxs/3},
author = "Daniel Kershaw and Rob Koeling",
keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
title = {Elsevier OA CC-BY Corpus},
publisher = {Mendeley},
year = {2020},
month = {sep}
}
```
### Contributions
Thanks to [@orieg](https://github.com/orieg) for adding this dataset. | orieg/elsevier-oa-cc-by | [
"task_categories:fill-mask",
"task_categories:summarization",
"task_categories:text-classification",
"task_ids:masked-language-modeling",
"task_ids:news-articles-summarization",
"task_ids:news-articles-headline-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2008.00774",
"region:us"
] | 2022-05-03T21:13:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["fill-mask", "summarization", "text-classification"], "task_ids": ["masked-language-modeling", "news-articles-summarization", "news-articles-headline-generation"], "paperswithcode_id": "elsevier-oa-cc-by", "pretty_name": "Elsevier OA CC-By"} | 2022-07-01T14:59:58+00:00 | [
"2008.00774"
] | [
"en"
] | TAGS
#task_categories-fill-mask #task_categories-summarization #task_categories-text-classification #task_ids-masked-language-modeling #task_ids-news-articles-summarization #task_ids-news-articles-headline-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2008.00774 #region-us
| Dataset Card for Elsevier OA CC-By
==================================
Table of Contents
-----------------
* Dataset Card for Elsevier OA CC-By
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact: @orieg
### Dataset Summary
Elsevier OA CC-By: This is a corpus of 40k (40,091) open access (OA) CC-BY articles from across Elsevier’s journals,
representing a large-scale, cross-discipline set of research data to support NLP and ML research. The corpus includes full-text
articles published from 2014 to 2020, categorized into 27 Mid Level ASJC Codes (subject classification).
*Distribution of Publication Years*
*Distribution of Articles Per Mid Level ASJC Code. Each article can belong to multiple ASJC codes.*
### Supported Tasks and Leaderboards
### Languages
English ('en').
Dataset Structure
-----------------
### Data Instances
The original dataset was published with the following json structure:
*docId* The docID is the identifier of the document. This is unique to the document, and can be resolved into a URL
for the document through the addition of 'https//URL
*abstract* This is the author provided abstract for the document
*body\_text* The full text for the document. The text has been split on sentence boundaries, thus making it easier to
use across research projects. Each sentence has the title (and ID) of the section which it is from, along with titles (and
IDs) of the parent section. The highest-level section takes index 0 in the parents array. If the array is empty then the
title of the section for the sentence is the highest level section title. This will allow for the reconstruction of the article
structure. References have been extracted from the sentences. The IDs of the extracted reference and their respective
offset within the sentence can be found in the “refoffsets” field. The complete list of references can be found in
the “bib\_entry” field along with the references’ respective metadata. Some will be missing, as we only keep ‘clean’
sentences.
*bib\_entities* All the references from within the document can be found in this section. If the meta data for the
reference is available, it has been added against the key for the reference. Where possible information such as the
document titles, authors, and relevant identifiers (DOI and PMID) are included. The keys for each reference can be
found in the sentence where the reference is used with the start and end offset of where in the sentence that reference
was used.
*metadata* Meta data includes additional information about the article, such as list of authors, relevant IDs (DOI and
PMID). Along with a number of classification schemes such as ASJC and Subject Classification.
*author\_highlights* Author highlights were included in the corpus where the author(s) have provided them. The
coverage is 61% of all articles. The author highlights, consisting of 4 to 6 sentences, are provided by the author with
the aim of summarising the core findings and results in the article.
### Data Fields
* *title*: This is the author provided title for the document. 100% coverage.
* *abstract*: This is the author provided abstract for the document. 99.25% coverage.
* *keywords*: This is the author and publisher provided keywords for the document. 100% coverage.
* *asjc*: This is the disciplines for the document as represented by 334 ASJC (All Science Journal Classification) codes. 100% coverage.
* *subjareas*: This is the Subject Classification for the document as represented by 27 ASJC top-level subject classifications. 100% coverage.
* *body\_text*: The full text for the document. 100% coverage.
* *author\_highlights*: This is the author provided highlights for the document. 61.31% coverage.
### Data Splits
*Article Counts per Split*
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
Date the data was collected: 2020-06-25T11:00:00.000Z
See the original paper for more detail on the data collection process.
#### Who are the source language producers?
See '3.1 Data Sampling' in the original paper.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
CC BY 4.0
### Contributions
Thanks to @orieg for adding this dataset.
| [
"### Dataset Summary\n\n\nElsevier OA CC-By: This is a corpus of 40k (40,091) open access (OA) CC-BY articles from across Elsevier’s journals\nrepresenting a large scale, cross-discipline set of research data to support NLP and ML research. The corpus include full-text\narticles published in 2014 to 2020 and are categorized in 27 Mid Level ASJC Code (subject classification).\n\n\n*Distribution of Publication Years*\n\n\n\n*Distribution of Articles Per Mid Level ASJC Code. Each article can belong to multiple ASJC codes.*",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish ('en').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe original dataset was published with the following json structure:\n\n\n*docId* The docID is the identifier of the document. This is unique to the document, and can be resolved into a URL\nfor the document through the addition of 'https//URL\n\n\n*abstract* This is the author provided abstract for the document\n\n\n*body\\_text* The full text for the document. The text has been split on sentence boundaries, thus making it easier to\nuse across research projects. Each sentence has the title (and ID) of the section which it is from, along with titles (and\nIDs) of the parent section. The highest-level section takes index 0 in the parents array. If the array is empty then the\ntitle of the section for the sentence is the highest level section title. This will allow for the reconstruction of the article\nstructure. References have been extracted from the sentences. The IDs of the extracted reference and their respective\noffset within the sentence can be found in the “refoffsets” field. The complete list of references are can be found in\nthe “bib\\_entry” field along with the references’ respective metadata. Some will be missing as we only keep ‘clean’\nsentences,\n\n\n*bib\\_entities* All the references from within the document can be found in this section. If the meta data for the\nreference is available, it has been added against the key for the reference. Where possible information such as the\ndocument titles, authors, and relevant identifiers (DOI and PMID) are included. The keys for each reference can be\nfound in the sentence where the reference is used with the start and end offset of where in the sentence that reference\nwas used.\n\n\n*metadata* Meta data includes additional information about the article, such as list of authors, relevant IDs (DOI and\nPMID). Along with a number of classification schemes such as ASJC and Subject Classification.\n\n\n*author\\_highlights* Author highlights were included in the corpus where the author(s) have provided them. The\ncoverage is 61% of all articles. The author highlights, consisting of 4 to 6 sentences, is provided by the author with\nthe aim of summarising the core findings and results in the article.",
"### Data Fields\n\n\n* *title*: This is the author provided title for the document. 100% coverage.\n* *abstract*: This is the author provided abstract for the document. 99.25% coverage.\n* *keywords*: This is the author and publisher provided keywords for the document. 100% coverage.\n* *asjc*: This is the disciplines for the document as represented by 334 ASJC (All Science Journal Classification) codes. 100% coverage.\n* *subjareas*: This is the Subject Classification for the document as represented by 27 ASJC top-level subject classifications. 100% coverage.\n* *body\\_text*: The full text for the document. 100% coverage.\n* *author\\_highlights*: This is the author provided highlights for the document. 61.31% coverage.",
"### Data Splits\n\n\n*Distribution of Publication Years*\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nDate the data was collected: 2020-06-25T11:00:00.000Z\n\n\nSee the original paper for more detail on the data collection process.",
"#### Who are the source language producers?\n\n\nSee '3.1 Data Sampling' in the original paper.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCC BY 4.0",
"### Contributions\n\n\nThanks to @orieg for adding this dataset."
] | [
"TAGS\n#task_categories-fill-mask #task_categories-summarization #task_categories-text-classification #task_ids-masked-language-modeling #task_ids-news-articles-summarization #task_ids-news-articles-headline-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2008.00774 #region-us \n",
"### Dataset Summary\n\n\nElsevier OA CC-By: This is a corpus of 40k (40,091) open access (OA) CC-BY articles from across Elsevier’s journals\nrepresenting a large scale, cross-discipline set of research data to support NLP and ML research. The corpus include full-text\narticles published in 2014 to 2020 and are categorized in 27 Mid Level ASJC Code (subject classification).\n\n\n*Distribution of Publication Years*\n\n\n\n*Distribution of Articles Per Mid Level ASJC Code. Each article can belong to multiple ASJC codes.*",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish ('en').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe original dataset was published with the following json structure:\n\n\n*docId* The docID is the identifier of the document. This is unique to the document, and can be resolved into a URL\nfor the document through the addition of 'https//URL\n\n\n*abstract* This is the author provided abstract for the document\n\n\n*body\\_text* The full text for the document. The text has been split on sentence boundaries, thus making it easier to\nuse across research projects. Each sentence has the title (and ID) of the section which it is from, along with titles (and\nIDs) of the parent section. The highest-level section takes index 0 in the parents array. If the array is empty then the\ntitle of the section for the sentence is the highest level section title. This will allow for the reconstruction of the article\nstructure. References have been extracted from the sentences. The IDs of the extracted reference and their respective\noffset within the sentence can be found in the “refoffsets” field. The complete list of references are can be found in\nthe “bib\\_entry” field along with the references’ respective metadata. Some will be missing as we only keep ‘clean’\nsentences,\n\n\n*bib\\_entities* All the references from within the document can be found in this section. If the meta data for the\nreference is available, it has been added against the key for the reference. Where possible information such as the\ndocument titles, authors, and relevant identifiers (DOI and PMID) are included. The keys for each reference can be\nfound in the sentence where the reference is used with the start and end offset of where in the sentence that reference\nwas used.\n\n\n*metadata* Meta data includes additional information about the article, such as list of authors, relevant IDs (DOI and\nPMID). Along with a number of classification schemes such as ASJC and Subject Classification.\n\n\n*author\\_highlights* Author highlights were included in the corpus where the author(s) have provided them. The\ncoverage is 61% of all articles. The author highlights, consisting of 4 to 6 sentences, is provided by the author with\nthe aim of summarising the core findings and results in the article.",
"### Data Fields\n\n\n* *title*: This is the author provided title for the document. 100% coverage.\n* *abstract*: This is the author provided abstract for the document. 99.25% coverage.\n* *keywords*: This is the author and publisher provided keywords for the document. 100% coverage.\n* *asjc*: This is the disciplines for the document as represented by 334 ASJC (All Science Journal Classification) codes. 100% coverage.\n* *subjareas*: This is the Subject Classification for the document as represented by 27 ASJC top-level subject classifications. 100% coverage.\n* *body\\_text*: The full text for the document. 100% coverage.\n* *author\\_highlights*: This is the author provided highlights for the document. 61.31% coverage.",
"### Data Splits\n\n\n*Distribution of Publication Years*\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nDate the data was collected: 2020-06-25T11:00:00.000Z\n\n\nSee the original paper for more detail on the data collection process.",
"#### Who are the source language producers?\n\n\nSee '3.1 Data Sampling' in the original paper.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCC BY 4.0",
"### Contributions\n\n\nThanks to @orieg for adding this dataset."
] |
aee7fea371b991a01db75877fd23e37d381379c0 |
# Dataset Card for Ukhushn/home-depot
| Ukhushn/home-depot | [
"task_categories:sentence-similarity",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-05-04T03:13:06+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["sentence-similarity"], "task_ids": [], "pretty_name": "Ukhushn/home-depot", "language_bcp47": ["en-US"]} | 2022-10-25T09:20:53+00:00 | [] | [
"en"
] | TAGS
#task_categories-sentence-similarity #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-afl-3.0 #region-us
|
# Dataset Card for Ukhushn/home-depot
| [
"# Dataset Card for Ukhushn/home-depot"
] | [
"TAGS\n#task_categories-sentence-similarity #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-afl-3.0 #region-us \n",
"# Dataset Card for Ukhushn/home-depot"
] |
ad1898d0b872d98ab4747b07315385a1736ce36b | This is used for the pix2seq model. It contains the preprocessed pixel values of the cats image, useful for converting. | nielsr/test | [
"region:us"
] | 2022-05-04T10:28:43+00:00 | {} | 2022-05-04T11:09:13+00:00 | [] | [] | TAGS
#region-us
| This is used for the pix2seq model. It contains the preprocessed pixel values of the cats image, useful for converting. | [] | [
"TAGS\n#region-us \n"
] |
30a47cfa557c996f64903f01250799d453be6215 |
# Dataset Card for TextVQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://textvqa.org
- **Repository:** https://github.com/facebookresearch/mmf
- **Paper:** https://arxiv.org/abs/1904.08920
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/874/overview
- **Point of Contact:** mailto:[email protected]
### Dataset Summary
TextVQA requires models to read and reason about text in images to answer questions about them.
Specifically, models need to incorporate a new modality of text present in the images and reason
over it to answer TextVQA questions. The TextVQA dataset contains 45,336 questions over 28,408 images
from the OpenImages dataset. The dataset uses the [VQA accuracy](https://visualqa.org/evaluation.html) metric for evaluation.
### Supported Tasks and Leaderboards
- `visual-question-answering`: The dataset can be used for Visual Question Answering tasks where given an image, you have to answer a question based on the image. For the TextVQA dataset specifically, the questions require reading and reasoning about the scene text in the given image.
### Languages
The questions in the dataset are in English.
## Dataset Structure
### Data Instances
A typical sample mainly contains the question in the `question` field, an image object in the `image` field, the OpenImages image id in `image_id`, and a lot of other useful metadata. Ten answers per question are contained in the `answers` attribute. For the test set, 10 empty strings are contained in the `answers` field, as the answers are not available for it.
An example looks like the one below:
```
{'question': 'who is this copyrighted by?',
'image_id': '00685bc495504d61',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x276021C5EB8>,
'image_classes': ['Vehicle', 'Tower', 'Airplane', 'Aircraft'],
'flickr_original_url': 'https://farm2.staticflickr.com/5067/5620759429_4ea686e643_o.jpg',
'flickr_300k_url': 'https://c5.staticflickr.com/6/5067/5620759429_f43a649fb5_z.jpg',
'image_width': 786,
'image_height': 1024,
'answers': ['simon clancy',
'simon ciancy',
'simon clancy',
'simon clancy',
'the brand is bayard',
'simon clancy',
'simon clancy',
'simon clancy',
'simon clancy',
'simon clancy'],
'question_tokens': ['who', 'is', 'this', 'copyrighted', 'by'],
'question_id': 3,
'set_name': 'train'
},
```
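A hedged sketch of pulling such a sample with the `datasets` library (the exact configuration name on the Hub may need to be specified):
```python
from datasets import load_dataset

# Streaming avoids downloading all of the images up front.
dataset = load_dataset("textvqa", split="validation", streaming=True)

sample = next(iter(dataset))
print(sample["question"])   # e.g. "who is this copyrighted by?"
print(sample["answers"])    # 10 human answers (empty strings on the test split)
sample["image"].save("sample.jpg")  # decoded as a PIL.Image.Image
```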
### Data Fields
- `question`: string, the question that is being asked about the image
- `image_id`: string, id of the image which is same as the OpenImages id
- `image`: A `PIL.Image.Image` object containing the image about which the question is being asked. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_classes`: List[str], The OpenImages classes to which the image belongs.
- `flickr_original_url`: string, URL to original image on Flickr
- `flickr_300k_url`: string, URL to resized and low-resolution image on Flickr.
- `image_width`: int, Width of the original image.
- `image_height`: int, Height of the original image.
- `question_tokens`: List[str], A pre-tokenized list of the question.
- `answers`: List[str], List of 10 human-annotated answers for the question. These 10 answers are collected from 10 different users. The list will contain empty strings for the test set, for which we don't have the answers.
- `question_id`: int, Unique id of the question.
- `set_name`: string, the set to which this question belongs.
### Data Splits
There are three splits: `train`, `validation` and `test`. The `train` and `validation` sets share images with the OpenImages `train` set and have their answers available. For test set answers, we return a list of ten empty strings. To get inference results and numbers on the `test` set, you need to go to the [EvalAI leaderboard](https://eval.ai/web/challenges/challenge-page/874/overview) and upload your predictions there. Please see instructions at [https://textvqa.org/challenge/](https://textvqa.org/challenge/).
## Dataset Creation
### Curation Rationale
From the paper:
> Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today’s VQA models can not read! Our paper takes a first step towards addressing this problem. First, we introduce a new “TextVQA” dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer.
### Source Data
#### Initial Data Collection and Normalization
The initial images were sourced from [OpenImages](https://storage.googleapis.com/openimages/web/factsfigures_v4.html) v4 dataset. These were first filtered based on automatic heuristics using an OCR system where we only took images which had at least some text detected in them. See [annotation process](#annotation-process) section to understand the next stages.
#### Who are the source language producers?
English Crowdsource Annotators
### Annotations
#### Annotation process
After the automatic process of filtering the images that contain text, the images were manually verified by human annotators to make sure that they had text. In the next stage, the annotators were asked to write questions involving scene text for the image. For some images, two questions were collected in this stage whenever possible. Finally, in the last stage, ten different human annotators answered the questions written in the previous stage.
#### Who are the annotators?
Annotators are from one of the major data collection platforms such as AMT. Exact details are not mentioned in the paper.
### Personal and Sensitive Information
The dataset does have similar PII issues as OpenImages and can at some times contain human faces, license plates, and documents. Using provided `image_classes` data field is one option to try to filter out some of this information.
## Considerations for Using the Data
### Social Impact of Dataset
The paper helped realize the importance of scene text recognition and reasoning in general-purpose machine learning applications and has led to many follow-up works, including [TextCaps](https://textvqa.org/textcaps) and [TextOCR](https://textvqa.org/textocr). Similar datasets were introduced over time which focus specifically on sight-disabled users, such as [VizWiz](https://vizwiz.org), or on the same problem as TextVQA, like [STVQA](https://paperswithcode.com/dataset/st-vqa), [DocVQA](https://arxiv.org/abs/2007.00398v3) and [OCRVQA](https://ocr-vqa.github.io/). Currently, most methods train on the combined TextVQA and STVQA data to achieve state-of-the-art performance on both datasets.
### Discussion of Biases
Question-only bias, where a model is able to answer the question without even looking at the image, is discussed in the [paper](https://arxiv.org/abs/1904.08920); it was a major issue with the original VQA dataset. The outlier bias in answers is prevented by collecting 10 different answers, which are also taken into consideration by the evaluation metric.
### Other Known Limitations
- The dataset is English-only but does involve images with non-English Latin characters, so it can involve some multilingual understanding.
- The performance on the dataset is also dependent on the quality of the OCR used, as OCR errors can directly lead to wrong answers.
- The metric used for calculating accuracy is the same as [VQA accuracy](https://visualqa.org/evaluation.html), sketched below. This involves one-to-one matching with the given answers and thus doesn't allow analyzing one-off errors introduced by OCR.
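A minimal sketch of that metric, omitting the answer normalization (lowercasing, punctuation and article handling) applied by the official evaluation code:
```python
def vqa_accuracy(prediction, gt_answers):
    """For each of the 10 annotators held out in turn, score the prediction as
    min(#matches among the remaining 9 answers / 3, 1), then average."""
    scores = []
    for i in range(len(gt_answers)):
        others = gt_answers[:i] + gt_answers[i + 1:]
        matches = sum(answer == prediction for answer in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# A prediction matching 9 of the 10 collected answers scores 1.0.
print(vqa_accuracy("simon clancy", ["simon clancy"] * 9 + ["the brand is bayard"]))
```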
## Additional Information
### Dataset Curators
- [Amanpreet Singh](https://github.com/apsdehal)
- Vivek Natarjan
- Meet Shah
- Yu Jiang
- Xinlei Chen
- Dhruv Batra
- Devi Parikh
- Marcus Rohrbach
### Licensing Information
CC by 4.0
### Citation Information
```bibtex
@inproceedings{singh2019towards,
title={Towards VQA Models That Can Read},
author={Singh, Amanpreet and Natarjan, Vivek and Shah, Meet and Jiang, Yu and Chen, Xinlei and Batra, Dhruv and Parikh, Devi and Rohrbach, Marcus},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={8317-8326},
year={2019}
}
```
### Contributions
Thanks to [@apsdehal](https://github.com/apsdehal) for adding this dataset. | textvqa | [
"task_categories:visual-question-answering",
"task_ids:visual-question-answering",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:1904.08920",
"arxiv:2007.00398",
"region:us"
] | 2022-05-05T05:44:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["visual-question-answering"], "task_ids": ["visual-question-answering"], "pretty_name": "TextVQA", "dataset_info": [{"config_name": "train", "features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21381310, "num_examples": 34602}, {"name": "validation", "num_bytes": 3077854, "num_examples": 5000}, {"name": "test", "num_bytes": 3025046, "num_examples": 5734}], "download_size": 8070116310, "dataset_size": 27484210}, {"config_name": "val", "features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21381310, "num_examples": 34602}, {"name": "validation", "num_bytes": 3077854, "num_examples": 5000}, {"name": "test", "num_bytes": 3025046, "num_examples": 5734}], "download_size": 8070116310, "dataset_size": 27484210}, {"config_name": "test", "features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21381310, "num_examples": 34602}, {"name": "validation", "num_bytes": 3077854, "num_examples": 5000}, {"name": "test", "num_bytes": 3025046, "num_examples": 5734}], "download_size": 8070116310, "dataset_size": 27484210}, {"config_name": "textvqa", "features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22073350, "num_examples": 34602}, 
{"name": "validation", "num_bytes": 3177854, "num_examples": 5000}, {"name": "test", "num_bytes": 3139726, "num_examples": 5734}], "download_size": 8070116310, "dataset_size": 28390930}]} | 2024-01-18T11:19:08+00:00 | [
"1904.08920",
"2007.00398"
] | [
"en"
] | TAGS
#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1904.08920 #arxiv-2007.00398 #region-us
|
# Dataset Card for TextVQA
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: mailto:amanpreet@URL
### Dataset Summary
TextVQA requires models to read and reason about text in images to answer questions about them.
Specifically, models need to incorporate a new modality of text present in the images and reason
over it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images
from the OpenImages dataset. The dataset uses VQA accuracy metric for evaluation.
### Supported Tasks and Leaderboards
- 'visual-question-answering': The dataset can be used for Visual Question Answering tasks where given an image, you have to answer a question based on the image. For the TextVQA dataset specifically, the questions require reading and reasoning about the scene text in the given image.
### Languages
The questions in the dataset are in English.
## Dataset Structure
### Data Instances
A typical sample mainly contains the question in the 'question' field, an image object in the 'image' field, the OpenImages image id in 'image_id' and a lot of other useful metadata. 10 answers per question are contained in the 'answers' attribute. For the test set, 10 empty strings are contained in the 'answers' field as the answers are not available for it.
An example looks like the one below:
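The sketch below assumes the dataset is available on the Hugging Face Hub under the ID `textvqa` with the `textvqa` config (as listed in this card's metadata); the printed values are only illustrative.

```python
from datasets import load_dataset

# Assumption: Hub dataset ID "textvqa", config "textvqa".
ds = load_dataset("textvqa", "textvqa", split="train")

sample = ds[0]
# Each record carries the keys described under "Data Fields":
#   question, question_id, question_tokens, image, image_id, image_width,
#   image_height, flickr_original_url, flickr_300k_url, answers,
#   image_classes, set_name
print(sample["question"])
print(sample["answers"])     # 10 human-annotated answers (empty strings on the test set)
print(sample["image"].size)  # decoded PIL image, (image_width, image_height)
```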
### Data Fields
- 'question': string, the question that is being asked about the image
- 'image_id': string, id of the image which is same as the OpenImages id
- 'image': A 'PIL.Image.Image' object containing the image about which the question is being asked. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. A short sketch illustrating this is given after this list.
- 'image_classes': List[str], The OpenImages classes to which the image belongs.
- 'flickr_original_url': string, URL to original image on Flickr
- 'flickr_300k_url': string, URL to resized and low-resolution image on Flickr.
- 'image_width': int, Width of the original image.
- 'image_height': int, Height of the original image.
- 'question_tokens': List[str], A pre-tokenized list of question.
- 'answers': List[str], List of 10 human-annotated answers for the question. These 10 answers are collected from 10 different users. The list will contain empty strings for test set for which we don't have the answers.
- 'question_id': int, Unique id of the question.
- 'set_name': string, the set to which this question belongs.
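As a small illustration of the decoding caveat noted for the 'image' field above (a sketch, using the same hypothetical `textvqa` Hub ID as earlier):

```python
from datasets import load_dataset

ds = load_dataset("textvqa", "textvqa", split="train")

# Preferred: index the row first, so only this one image is decoded.
img = ds[0]["image"]

# Avoid: ds["image"][0] would materialize and decode the whole image column first.

print(img.size)                # (image_width, image_height)
print(ds[0]["image_classes"])  # OpenImages classes for the same record
```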
### Data Splits
There are three splits. 'train', 'validation' and 'test'. The 'train' and 'validation' sets share images with OpenImages 'train' set and have their answers available. For test set answers, we return a list of ten empty strings. To get inference results and numbers on 'test' set, you need to go to the EvalAI leaderboard and upload your predictions there. Please see instructions at URL
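A short sketch of checking the three splits follows (same hypothetical `textvqa` Hub ID assumption as above; the sizes in the comment come from this card's metadata):

```python
from datasets import load_dataset

ds = load_dataset("textvqa", "textvqa")

print({name: split.num_rows for name, split in ds.items()})
# e.g. {'train': 34602, 'validation': 5000, 'test': 5734}

# Test answers are placeholders (ten empty strings); real scores come from the EvalAI leaderboard.
print(ds["test"][0]["answers"])
```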
## Dataset Creation
### Curation Rationale
From the paper:
> Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today’s VQA models can not read! Our paper takes a first step towards addressing this problem. First, we introduce a new “TextVQA” dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer.
### Source Data
#### Initial Data Collection and Normalization
The initial images were sourced from the OpenImages v4 dataset. These were first filtered based on automatic heuristics using an OCR system, where we only took images which had at least some text detected in them. See the annotation process section to understand the next stages.
#### Who are the source language producers?
English Crowdsource Annotators
### Annotations
#### Annotation process
After the automatic process of filtering the images that contain text, the images were manually verified by human annotators to make sure that they had text. In the next stage, the annotators were asked to write questions involving scene text for the image. For some images, two questions were collected in this stage whenever possible. Finally, in the last stage, ten different human annotators answered the questions asked in the previous stage.
#### Who are the annotators?
Annotators are from one of the major data collection platforms such as AMT. Exact details are not mentioned in the paper.
### Personal and Sensitive Information
The dataset has similar PII issues to OpenImages and can at times contain human faces, license plates, and documents. Using the provided 'image_classes' data field is one option to try to filter out some of this information.
## Considerations for Using the Data
### Social Impact of Dataset
The paper helped realize the importance of scene text recognition and reasoning in general purpose machine learning applications and has led to many follow-up works including TextCaps and TextOCR. Similar datasets were introduced over time which specifically focus on sight-disabled users, such as VizWiz, or focus specifically on the same problem as TextVQA, like STVQA, DocVQA and OCRVQA. Currently, most methods train on a combined dataset from TextVQA and STVQA to achieve state-of-the-art performance on both datasets.
### Discussion of Biases
Question-only bias, where a model is able to answer the question without even looking at the image, is discussed in the paper and was a major issue with the original VQA dataset. The outlier bias in answers is prevented by collecting 10 different answers, which are also taken into consideration by the evaluation metric.
### Other Known Limitations
- The dataset is English only but does involve images with non-English Latin characters, so it can involve some multilingual understanding.
- The performance on the dataset is also dependent on the quality of OCR used as the OCR errors can directly lead to wrong answers.
- The metric used for calculating accuracy is the same as VQA accuracy. This involves one-to-one matching with the given answers and thus doesn't allow analyzing near-miss errors arising from OCR; a rough sketch of the metric follows.
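The sketch below follows the commonly used soft VQA accuracy formula (an answer counts as fully correct if at least 3 of the 10 annotators gave exactly that string); it is a simplification of the official implementation, which also normalizes answers and averages over annotator subsets:

```python
def vqa_accuracy(prediction: str, human_answers: list) -> float:
    """Simplified VQA-style soft accuracy over the 10 human answers."""
    matches = sum(answer == prediction for answer in human_answers)
    return min(matches / 3.0, 1.0)

answers = ["nikon"] * 7 + ["canon"] * 3
print(vqa_accuracy("nikon", answers))  # 1.0
print(vqa_accuracy("nikom", answers))  # 0.0 -- a one-character OCR slip gets no partial credit
```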
## Additional Information
### Dataset Curators
- Amanpreet Singh
- Vivek Natarjan
- Meet Shah
- Yu Jiang
- Xinlei Chen
- Dhruv Batra
- Devi Parikh
- Marcus Rohrbach
### Licensing Information
CC by 4.0
### Contributions
Thanks to @apsdehal for adding this dataset. | [
"# Dataset Card for TextVQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: mailto:amanpreet@URL",
"### Dataset Summary\n\nTextVQA requires models to read and reason about text in images to answer questions about them.\nSpecifically, models need to incorporate a new modality of text present in the images and reason\nover it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images\nfrom the OpenImages dataset. The dataset uses VQA accuracy metric for evaluation.",
"### Supported Tasks and Leaderboards\n\n- 'visual-question-answering': The dataset can be used for Visual Question Answering tasks where given an image, you have to answer a question based on the image. For the TextVQA dataset specifically, the questions require reading and reasoning about the scene text in the given image.",
"### Languages\n\nThe questions in the dataset are in English.",
"## Dataset Structure",
"### Data Instances\n\nA typical sample mainly contains the question in 'question' field, an image object in 'image' field, OpenImage image id in 'image_id' and lot of other useful metadata. 10 answers per questions are contained in the 'answers' attribute. For test set, 10 empty strings are contained in the 'answers' field as the answers are not available for it.\n\nAn example look like below:",
"### Data Fields\n\n- 'question': string, the question that is being asked about the image\n- 'image_id': string, id of the image which is same as the OpenImages id\n- 'image': A 'PIL.Image.Image' object containing the image about which the question is being asked. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n- 'image_classes': List[str], The OpenImages classes to which the image belongs to.\n- 'flickr_original_url': string, URL to original image on Flickr\n- 'flickr_300k_url': string, URL to resized and low-resolution image on Flickr.\n- 'image_width': int, Width of the original image.\n- 'image_height': int, Height of the original image.\n- 'question_tokens': List[str], A pre-tokenized list of question.\n- 'answers': List[str], List of 10 human-annotated answers for the question. These 10 answers are collected from 10 different users. The list will contain empty strings for test set for which we don't have the answers.\n- 'question_id': int, Unique id of the question.\n- 'set_name': string, the set to which this question belongs.",
"### Data Splits\n\nThere are three splits. 'train', 'validation' and 'test'. The 'train' and 'validation' sets share images with OpenImages 'train' set and have their answers available. For test set answers, we return a list of ten empty strings. To get inference results and numbers on 'test' set, you need to go to the EvalAI leaderboard and upload your predictions there. Please see instructions at URL",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the paper:\n\n> Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today’s VQA models can not read! Our paper takes a first step towards addressing this problem. First, we introduce a new “TextVQA” dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe initial images were sourced from OpenImages v4 dataset. These were first filtered based on automatic heuristics using an OCR system where we only took images which had at least some text detected in them. See annotation process section to understand the next stages.",
"#### Who are the source language producers?\n\nEnglish Crowdsource Annotators",
"### Annotations",
"#### Annotation process\n\n After the automatic process of filter the images that contain text, the images were manually verified using human annotators making sure that they had text. In next stage, the annotators were asked to write questions involving scene text for the image. For some images, in this stage, two questions were collected whenever possible. Finally, in the last stage, ten different human annotators answer the questions asked in last stage.",
"#### Who are the annotators?\n\nAnnotators are from one of the major data collection platforms such as AMT. Exact details are not mentioned in the paper.",
"### Personal and Sensitive Information\n\nThe dataset does have similar PII issues as OpenImages and can at some times contain human faces, license plates, and documents. Using provided 'image_classes' data field is one option to try to filter out some of this information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe paper helped realize the importance of scene text recognition and reasoning in general purpose machine learning applications and has led to many follow-up works including TextCaps and TextOCR. Similar datasets were introduced over the time which specifically focus on sight-disabled users such as VizWiz or focusing specifically on the same problem as TextVQA like STVQA, DocVQA and OCRVQA. Currently, most methods train on combined dataset from TextVQA and STVQA to achieve state-of-the-art performance on both datasets.",
"### Discussion of Biases\n\nQuestion-only bias where a model is able to answer the question without even looking at the image is discussed in the paper which was a major issue with original VQA dataset. The outlier bias in answers is prevented by collecting 10 different answers which are also taken in consideration by the evaluation metric.",
"### Other Known Limitations\n\n- The dataset is english only but does involve images with non-English latin characters so can involve some multi-lingual understanding.\n- The performance on the dataset is also dependent on the quality of OCR used as the OCR errors can directly lead to wrong answers.\n- The metric used for calculating accuracy is same as VQA accuracy. This involves one-to-one matching with the given answers and thus doesn't allow analyzing one-off errors through OCR.",
"## Additional Information",
"### Dataset Curators\n\n- Amanpreet Singh\n- Vivek Natarjan\n- Meet Shah\n- Yu Jiang\n- Xinlei Chen\n- Dhruv Batra\n- Devi Parikh\n- Marcus Rohrbach",
"### Licensing Information\n\nCC by 4.0",
"### Contributions\n\nThanks to @apsdehal for adding this dataset."
] | [
"TAGS\n#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1904.08920 #arxiv-2007.00398 #region-us \n",
"# Dataset Card for TextVQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: mailto:amanpreet@URL",
"### Dataset Summary\n\nTextVQA requires models to read and reason about text in images to answer questions about them.\nSpecifically, models need to incorporate a new modality of text present in the images and reason\nover it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images\nfrom the OpenImages dataset. The dataset uses VQA accuracy metric for evaluation.",
"### Supported Tasks and Leaderboards\n\n- 'visual-question-answering': The dataset can be used for Visual Question Answering tasks where given an image, you have to answer a question based on the image. For the TextVQA dataset specifically, the questions require reading and reasoning about the scene text in the given image.",
"### Languages\n\nThe questions in the dataset are in English.",
"## Dataset Structure",
"### Data Instances\n\nA typical sample mainly contains the question in 'question' field, an image object in 'image' field, OpenImage image id in 'image_id' and lot of other useful metadata. 10 answers per questions are contained in the 'answers' attribute. For test set, 10 empty strings are contained in the 'answers' field as the answers are not available for it.\n\nAn example look like below:",
"### Data Fields\n\n- 'question': string, the question that is being asked about the image\n- 'image_id': string, id of the image which is same as the OpenImages id\n- 'image': A 'PIL.Image.Image' object containing the image about which the question is being asked. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n- 'image_classes': List[str], The OpenImages classes to which the image belongs to.\n- 'flickr_original_url': string, URL to original image on Flickr\n- 'flickr_300k_url': string, URL to resized and low-resolution image on Flickr.\n- 'image_width': int, Width of the original image.\n- 'image_height': int, Height of the original image.\n- 'question_tokens': List[str], A pre-tokenized list of question.\n- 'answers': List[str], List of 10 human-annotated answers for the question. These 10 answers are collected from 10 different users. The list will contain empty strings for test set for which we don't have the answers.\n- 'question_id': int, Unique id of the question.\n- 'set_name': string, the set to which this question belongs.",
"### Data Splits\n\nThere are three splits. 'train', 'validation' and 'test'. The 'train' and 'validation' sets share images with OpenImages 'train' set and have their answers available. For test set answers, we return a list of ten empty strings. To get inference results and numbers on 'test' set, you need to go to the EvalAI leaderboard and upload your predictions there. Please see instructions at URL",
"## Dataset Creation",
"### Curation Rationale\n\nFrom the paper:\n\n> Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today’s VQA models can not read! Our paper takes a first step towards addressing this problem. First, we introduce a new “TextVQA” dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe initial images were sourced from OpenImages v4 dataset. These were first filtered based on automatic heuristics using an OCR system where we only took images which had at least some text detected in them. See annotation process section to understand the next stages.",
"#### Who are the source language producers?\n\nEnglish Crowdsource Annotators",
"### Annotations",
"#### Annotation process\n\n After the automatic process of filter the images that contain text, the images were manually verified using human annotators making sure that they had text. In next stage, the annotators were asked to write questions involving scene text for the image. For some images, in this stage, two questions were collected whenever possible. Finally, in the last stage, ten different human annotators answer the questions asked in last stage.",
"#### Who are the annotators?\n\nAnnotators are from one of the major data collection platforms such as AMT. Exact details are not mentioned in the paper.",
"### Personal and Sensitive Information\n\nThe dataset does have similar PII issues as OpenImages and can at some times contain human faces, license plates, and documents. Using provided 'image_classes' data field is one option to try to filter out some of this information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe paper helped realize the importance of scene text recognition and reasoning in general purpose machine learning applications and has led to many follow-up works including TextCaps and TextOCR. Similar datasets were introduced over the time which specifically focus on sight-disabled users such as VizWiz or focusing specifically on the same problem as TextVQA like STVQA, DocVQA and OCRVQA. Currently, most methods train on combined dataset from TextVQA and STVQA to achieve state-of-the-art performance on both datasets.",
"### Discussion of Biases\n\nQuestion-only bias where a model is able to answer the question without even looking at the image is discussed in the paper which was a major issue with original VQA dataset. The outlier bias in answers is prevented by collecting 10 different answers which are also taken in consideration by the evaluation metric.",
"### Other Known Limitations\n\n- The dataset is english only but does involve images with non-English latin characters so can involve some multi-lingual understanding.\n- The performance on the dataset is also dependent on the quality of OCR used as the OCR errors can directly lead to wrong answers.\n- The metric used for calculating accuracy is same as VQA accuracy. This involves one-to-one matching with the given answers and thus doesn't allow analyzing one-off errors through OCR.",
"## Additional Information",
"### Dataset Curators\n\n- Amanpreet Singh\n- Vivek Natarjan\n- Meet Shah\n- Yu Jiang\n- Xinlei Chen\n- Dhruv Batra\n- Devi Parikh\n- Marcus Rohrbach",
"### Licensing Information\n\nCC by 4.0",
"### Contributions\n\nThanks to @apsdehal for adding this dataset."
] |
45fcf91a8a00c86ad4b4c2034399dda55c5c9adc |
# Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset)
- **Repository:** https://github.com/zhouhaoyi/ETDataset
- **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436)
- **Point of Contact:** [Haoyi Zhou](mailto:[email protected])
### Dataset Summary
The electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false prediction may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on an empirical number, which is much higher than the real-world demand. This causes unnecessary waste of electricity and equipment depreciation. On the other hand, the oil temperature can reflect the condition of the transformer. One of the most efficient strategies is therefore to predict whether the electrical transformer's oil temperature is safe and so avoid unnecessary waste. To address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided two years' worth of data.
Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The data are obtained from 2 electricity transformers at 2 stations and come in an `1H` (hourly) or `15T` (15-minute) frequency, containing 2 years * 365 days * 24 hours * (4 for 15T) = 17,520 (70,080 for 15T) data points.
The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup:
* **H**igh **U**se**F**ul **L**oad
* **H**igh **U**se**L**ess **L**oad
* **M**iddle **U**se**F**ul **L**oad
* **M**iddle **U**se**L**ess **L**oad
* **L**ow **U**se**F**ul **L**oad
* **L**ow **U**se**L**ess **L**oad
### Dataset Usage
To load a particular variant of the dataset, just specify its name, e.g.:
```python
load_dataset("ett", "m1", multivariate=False) # univariate 15-min frequency dataset from first transformer
```
or to specify a prediction length:
```python
load_dataset("ett", "h2", prediction_length=48) # multivariate dataset from second transformer with prediction length of 48 (hours)
```
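A short follow-up sketch of what the loaded object looks like (both keyword arguments are the ones shown above; field names follow the "Data Fields" section below):

```python
from datasets import load_dataset

ds = load_dataset("ett", "h1", multivariate=False, prediction_length=24)

print(ds)                     # DatasetDict with train/validation/test splits
series = ds["train"][0]
print(series["start"])        # first timestamp of the series
print(len(series["target"]))  # number of hourly observations in the training range
print(series["item_id"])      # e.g. 'OT' for the oil-temperature series
```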
### Supported Tasks and Leaderboards
The time series data is split into train/val/test sets of 12/4/4 months respectively. Given the prediction length (default: 1 day, i.e. 24 hours or 24*4 15-minute steps) we create rolling windows of this size for the val/test sets.
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covariates are stored in the `feat_dynamic_real` key of each time series.
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
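As an illustration of the rolling-window setup described above, a minimal sketch of slicing a series into evaluation windows might look like the following (a toy illustration only, not the exact windowing code used to build the splits):

```python
import numpy as np

def rolling_windows(target, prediction_length, min_context):
    """Yield (context, future) pairs; `future` is one forecast window."""
    for end in range(min_context + prediction_length, len(target) + 1, prediction_length):
        yield target[: end - prediction_length], target[end - prediction_length : end]

series = np.arange(96, dtype=np.float32)  # toy hourly series covering 4 days
for context, future in rolling_windows(series, prediction_length=24, min_context=24):
    print(len(context), "context points ->", len(future), "points to forecast")
```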
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': 'OT'
}
```
### Data Fields
For the univariate regular time series each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
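For example, a univariate entry can be turned into a pandas series using these keys (a sketch for the hourly variants, with the same load call as in "Dataset Usage" above):

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("ett", "h1", multivariate=False)
entry = ds["train"][0]

index = pd.period_range(start=entry["start"], periods=len(entry["target"]), freq="H")
target = pd.Series(entry["target"], index=index, name=entry["item_id"])
print(target.head())
```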
### Data Splits
The time series data is split into train/val/test sets of 12/4/4 months, respectively.
## Dataset Creation
### Curation Rationale
Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Haoyi Zhou](mailto:[email protected])
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@inproceedings{haoyietal-informer-2021,
author = {Haoyi Zhou and
Shanghang Zhang and
Jieqi Peng and
Shuai Zhang and
Jianxin Li and
Hui Xiong and
Wancai Zhang},
title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
volume = {35},
number = {12},
pages = {11106--11115},
publisher = {{AAAI} Press},
year = {2021},
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. | ett | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"arxiv:2012.07436",
"region:us"
] | 2022-05-05T11:12:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": [], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["time-series-forecasting"], "task_ids": ["univariate-time-series-forecasting", "multivariate-time-series-forecasting"], "pretty_name": "Electricity Transformer Temperature", "dataset_info": [{"config_name": "h1", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 241978, "num_examples": 1}, {"name": "test", "num_bytes": 77508960, "num_examples": 240}, {"name": "validation", "num_bytes": 33916080, "num_examples": 120}], "download_size": 2589657, "dataset_size": 111667018}, {"config_name": "h2", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 241978, "num_examples": 1}, {"name": "test", "num_bytes": 77508960, "num_examples": 240}, {"name": "validation", "num_bytes": 33916080, "num_examples": 120}], "download_size": 2417960, "dataset_size": 111667018}, {"config_name": "m1", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 967738, "num_examples": 1}, {"name": "test", "num_bytes": 1239008640, "num_examples": 960}, {"name": "validation", "num_bytes": 542089920, "num_examples": 480}], "download_size": 10360719, "dataset_size": 1782066298}, {"config_name": "m2", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 967738, "num_examples": 1}, {"name": "test", "num_bytes": 1239008640, "num_examples": 960}, {"name": "validation", "num_bytes": 542089920, "num_examples": 480}], "download_size": 9677236, "dataset_size": 1782066298}]} | 2024-01-18T11:19:09+00:00 | [
"2012.07436"
] | [] | TAGS
#task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #arxiv-2012.07436 #region-us
|
# Dataset Card for Electricity Transformer Temperature
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Electricity Transformer Dataset
- Repository: URL
- Paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
- Point of Contact: Haoyi Zhou
### Dataset Summary
The electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false predictions may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on the empirical number, which is much higher than the real-world demands. It causes unnecessary waste of electric and equipment depreciation. On the other hand, the oil temperatures can reflect the condition of the Transformer. One of the most efficient strategies is to predict how the electrical transformers' oil temperature is safe and avoid unnecessary waste. As a result, to address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2-years worth of data.
Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset are obtained from 2 Electricity Transformers at 2 stations and come in an '1H' (hourly) or '15T' (15-minute) frequency containing 2 year * 365 days * 24 hours * (4 for 15T) times = 17,520 (70,080 for 15T) data points.
The target time series is the Oil Temperature and the dataset comes with the following 6 covariates in the univariate setup:
* High UseFul Load
* High UseLess Load
* Middle UseFul Load
* Middle UseLess Load
* Low UseFul Load
* Low UseLess Load
### Dataset Usage
To load a particular variant of the dataset just specify its name e.g:
or to specify a prediction length:
### Supported Tasks and Leaderboards
The time series data is split into train/val/test set of 12/4/4 months respectively. Given the prediction length (default: 1 day (24 hours or 24*4 15T)) we create rolling windows of this size for the val/test sets.
#### 'time-series-forecasting'
##### 'univariate-time-series-forecasting'
The univariate time series forecasting tasks involves learning the future one dimensional 'target' values of a time series in a dataset for some 'prediction_length' time steps. The performance of the forecast models can then be validated via the ground truth in the 'validation' split and tested via the 'test' split. The covriates are stored in the 'feat_dynamic_real' key of each time series.
##### 'multivariate-time-series-forecasting'
The multivariate time series forecasting task involves learning the future vector of 'target' values of a time series in a dataset for some 'prediction_length' time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the 'validation' split and tested via the 'test' split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
### Data Fields
For the univariate regular time series each series has the following keys:
* 'start': a datetime of the first entry of each time series in the dataset
* 'target': an array[float32] of the actual target values
* 'feat_static_cat': an array[uint64] which contains a categorical identifier of each time series in the dataset
* 'feat_dynamic_real': optional array of covariate features
* 'item_id': a string identifier of each time series in a dataset for reference
For the multivariate time series the 'target' is a vector of the multivariate dimension for each time point.
### Data Splits
The time series data is split into train/val/test set of 12/4/4 months respectively.
## Dataset Creation
### Curation Rationale
Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
* Haoyi Zhou
### Licensing Information
Creative Commons Attribution 4.0 International
### Contributions
Thanks to @kashif for adding this dataset. | [
"# Dataset Card for Electricity Transformer Temperature",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Electricity Transformer Dataset\n- Repository: URL\n- Paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting\n- Point of Contact: Haoyi Zhou",
"### Dataset Summary\n\nThe electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false predictions may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on the empirical number, which is much higher than the real-world demands. It causes unnecessary waste of electric and equipment depreciation. On the other hand, the oil temperatures can reflect the condition of the Transformer. One of the most efficient strategies is to predict how the electrical transformers' oil temperature is safe and avoid unnecessary waste. As a result, to address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2-years worth of data.\n\nSpecifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset are obtained from 2 Electricity Transformers at 2 stations and come in an '1H' (hourly) or '15T' (15-minute) frequency containing 2 year * 365 days * 24 hours * (4 for 15T) times = 17,520 (70,080 for 15T) data points.\n\nThe target time series is the Oil Temperature and the dataset comes with the following 6 covariates in the univariate setup:\n* High UseFul Load \n* High UseLess Load\n* Middle UseFul Load \n* Middle UseLess Load \n* Low UseFul Load \n* Low UseLess Load",
"### Dataset Usage\n\nTo load a particular variant of the dataset just specify its name e.g:\n\n\n\nor to specify a prediction length:",
"### Supported Tasks and Leaderboards\n\nThe time series data is split into train/val/test set of 12/4/4 months respectively. Given the prediction length (default: 1 day (24 hours or 24*4 15T)) we create rolling windows of this size for the val/test sets.",
"#### 'time-series-forecasting'",
"##### 'univariate-time-series-forecasting'\n\nThe univariate time series forecasting tasks involves learning the future one dimensional 'target' values of a time series in a dataset for some 'prediction_length' time steps. The performance of the forecast models can then be validated via the ground truth in the 'validation' split and tested via the 'test' split. The covriates are stored in the 'feat_dynamic_real' key of each time series.",
"##### 'multivariate-time-series-forecasting'\n\nThe multivariate time series forecasting task involves learning the future vector of 'target' values of a time series in a dataset for some 'prediction_length' time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the 'validation' split and tested via the 'test' split.",
"### Languages",
"## Dataset Structure",
"### Data Instances\n\nA sample from the training set is provided below:",
"### Data Fields\n\nFor the univariate regular time series each series has the following keys:\n\n* 'start': a datetime of the first entry of each time series in the dataset\n* 'target': an array[float32] of the actual target values\n* 'feat_static_cat': an array[uint64] which contains a categorical identifier of each time series in the dataset\n* 'feat_dynamic_real': optional array of covariate features\n* 'item_id': a string identifier of each time series in a dataset for reference\n\nFor the multivariate time series the 'target' is a vector of the multivariate dimension for each time point.",
"### Data Splits\n\nThe time series data is split into train/val/test set of 12/4/4 months respectively.",
"## Dataset Creation",
"### Curation Rationale\n\nDevelop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n* Haoyi Zhou",
"### Licensing Information\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\nThanks to @kashif for adding this dataset."
] | [
"TAGS\n#task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #arxiv-2012.07436 #region-us \n",
"# Dataset Card for Electricity Transformer Temperature",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Electricity Transformer Dataset\n- Repository: URL\n- Paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting\n- Point of Contact: Haoyi Zhou",
"### Dataset Summary\n\nThe electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false predictions may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on the empirical number, which is much higher than the real-world demands. It causes unnecessary waste of electric and equipment depreciation. On the other hand, the oil temperatures can reflect the condition of the Transformer. One of the most efficient strategies is to predict how the electrical transformers' oil temperature is safe and avoid unnecessary waste. As a result, to address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2-years worth of data.\n\nSpecifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset are obtained from 2 Electricity Transformers at 2 stations and come in an '1H' (hourly) or '15T' (15-minute) frequency containing 2 year * 365 days * 24 hours * (4 for 15T) times = 17,520 (70,080 for 15T) data points.\n\nThe target time series is the Oil Temperature and the dataset comes with the following 6 covariates in the univariate setup:\n* High UseFul Load \n* High UseLess Load\n* Middle UseFul Load \n* Middle UseLess Load \n* Low UseFul Load \n* Low UseLess Load",
"### Dataset Usage\n\nTo load a particular variant of the dataset just specify its name e.g:\n\n\n\nor to specify a prediction length:",
"### Supported Tasks and Leaderboards\n\nThe time series data is split into train/val/test set of 12/4/4 months respectively. Given the prediction length (default: 1 day (24 hours or 24*4 15T)) we create rolling windows of this size for the val/test sets.",
"#### 'time-series-forecasting'",
"##### 'univariate-time-series-forecasting'\n\nThe univariate time series forecasting tasks involves learning the future one dimensional 'target' values of a time series in a dataset for some 'prediction_length' time steps. The performance of the forecast models can then be validated via the ground truth in the 'validation' split and tested via the 'test' split. The covriates are stored in the 'feat_dynamic_real' key of each time series.",
"##### 'multivariate-time-series-forecasting'\n\nThe multivariate time series forecasting task involves learning the future vector of 'target' values of a time series in a dataset for some 'prediction_length' time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the 'validation' split and tested via the 'test' split.",
"### Languages",
"## Dataset Structure",
"### Data Instances\n\nA sample from the training set is provided below:",
"### Data Fields\n\nFor the univariate regular time series each series has the following keys:\n\n* 'start': a datetime of the first entry of each time series in the dataset\n* 'target': an array[float32] of the actual target values\n* 'feat_static_cat': an array[uint64] which contains a categorical identifier of each time series in the dataset\n* 'feat_dynamic_real': optional array of covariate features\n* 'item_id': a string identifier of each time series in a dataset for reference\n\nFor the multivariate time series the 'target' is a vector of the multivariate dimension for each time point.",
"### Data Splits\n\nThe time series data is split into train/val/test set of 12/4/4 months respectively.",
"## Dataset Creation",
"### Curation Rationale\n\nDevelop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n* Haoyi Zhou",
"### Licensing Information\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\nThanks to @kashif for adding this dataset."
] |
7d5a48f50f02568d74fb4d0ca4c333684addc26d | # hotpotExtendedAno-LM
Version of hotpotExtended-Annotated formatted for Language Modelling. | ghomasHudson/hotpotExtendedAnoLM | [
"region:us"
] | 2022-05-05T13:07:24+00:00 | {} | 2022-05-05T13:09:26+00:00 | [] | [] | TAGS
#region-us
| # hotpotExtendedAno-LM
Version of hotpotExtended-Annotated formatted for Language Modelling. | [
"# hotpotExtendedAno-LM\n\nVersion of hotpotExtended-Annotated formatted for Language Modelling."
] | [
"TAGS\n#region-us \n",
"# hotpotExtendedAno-LM\n\nVersion of hotpotExtended-Annotated formatted for Language Modelling."
] |
7d23ca7e27a34288f7083619602a7a902938ead9 |
# Doctor Who Dialogues
This dataset contains all the script lines of the Eleventh Doctor from Doctor Who, a popular TV series. It can be processed and used for chatbots or related dialogue tasks. | truthisneverlinear/eleventh-doctor-scripts | [
"language:en",
"NLP",
"conservation",
"dialogue",
"region:us"
] | 2022-05-05T13:34:18+00:00 | {"language": "en", "tags": ["NLP", "conservation", "dialogue"]} | 2022-05-05T13:39:18+00:00 | [] | [
"en"
] | TAGS
#language-English #NLP #conservation #dialogue #region-us
|
# Doctor Who Dialogues
This dataset contains all the script lines of Eleventh Doctor from Doctor Who which is a popular TV series. It can be processed and used for chatbots or relevant stuff. | [
"# Doctor Who Dialogues\n\nThis dataset contains all the script lines of Eleventh Doctor from Doctor Who which is a popular TV series. It can be processed and used for chatbots or relevant stuff."
] | [
"TAGS\n#language-English #NLP #conservation #dialogue #region-us \n",
"# Doctor Who Dialogues\n\nThis dataset contains all the script lines of Eleventh Doctor from Doctor Who which is a popular TV series. It can be processed and used for chatbots or relevant stuff."
] |
3997b23648a3961e1e566d976ac9310aa2ac5e20 |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
<!-- - [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions) -->
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:[email protected])
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data was developed specifically for the visual storytelling (or VIST) task. It includes data from 364 languages across 31 language families. There is a mean of 32 stories and a median of 2 stories per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 363 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul
## Dataset Statistics
Some of the languages included in the dataset have only one or a handful of "stories." For those with a higher number of available stories, the counts are as follows:
| ISO639-3 Code | Stories | Image-Caption Pairs |
|:-----------|----------:|----------------------:|
| ahk | 55 | 493 |
| awa | 163 | 1200 |
| ben | 220 | 1938 |
| bho | 172 | 1163 |
| bis | 21 | 183 |
| brb | 22 | 330 |
| bzi | 66 | 497 |
| cak | 50 | 694 |
| ceb | 394 | 2806 |
| cgc | 182 | 1473 |
| deu | 22 | 250 |
| dty | 172 | 1310 |
| eng | 2187 | 24338 |
| fas | 128 | 620 |
| fil | 34 | 366 |
| fra | 315 | 4350 |
| hat | 224 | 1881 |
| hau | 229 | 1594 |
| ind | 232 | 1866 |
| jra | 56 | 575 |
| kak | 195 | 1416 |
| kek | 21 | 419 |
| khb | 31 | 167 |
| khm | 26 | 246 |
| kir | 278 | 2866 |
| kjb | 63 | 584 |
| kor | 129 | 2732 |
| krr | 29 | 362 |
| lsi | 22 | 173 |
| mai | 177 | 1186 |
| mam | 118 | 1058 |
| mhx | 51 | 544 |
| myk | 22 | 214 |
| nep | 194 | 1464 |
| new | 177 | 1225 |
| pbt | 203 | 979 |
| por | 148 | 2939 |
| quc | 99 | 817 |
| rus | 271 | 2977 |
| snk | 21 | 210 |
| spa | 444 | 5201 |
| swh | 34 | 387 |
| tdg | 31 | 231 |
| tha | 275 | 2929 |
| thl | 185 | 1464 |
| tpi | 137 | 1528 |
| tpu | 28 | 513 |
| zho | 42 | 339 |
## Dataset Structure
### Data Instances
The examples look like this for Hindi:
```
from datasets import load_dataset
# Specify the language code.
dataset = load_dataset("sil-ai/bloom-vist", 'hin')
# An individual sample consists of stories in the specified language.
# To see a story:
print(dataset['train'][0]['story'])
```
This would produce an output:
```
{'image_id': ['4e9bdde5-996d-4a98-ac1c-d80fb6349314',
'614e4d51-bbdb-4538-98d3-f603c12dccd0',
'970d60bf-2acb-44ac-8ffb-5aa3f7989630',
'd4ad1199-863e-4929-a377-93276fe5caa8',
'0d9ad694-995a-433d-af4e-6f40ddfa208a',
'811176eb-c9f3-4226-8af5-e6c4e524c494',
'83180da7-4ba8-4104-a0d9-49aa2ef48f7a'],
'image_url': ['https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_03_Image_00011.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_04_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_05_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_06_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_07_Image_0001.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_07_Image_00011.png',
'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_09_Image_0001.png'],
'story_index': [0, 1, 2, 3, 4, 5, 6],
'story_id': ['cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6',
'cc34c1c7-c086-491b-8e6a-65572e1efdb6'],
'text': ['साबू ने एक कंकड़ को ठोकर मारी। कंकड़ लुढ़कता हुआ एक पेड़ के पास पहुँचा। पेड़ के तने पर मुलायम बाल थे। साबू ने छुए और ऊपर देखा, ऊपर, ऊपर और उससे भी ऊपर...दो आँखें नीचे देख रही थीं।',
'“हेलो, तुम कौन हो?” साबू को बड़ा अचम्भा हुआ।“हेलो, मैं जिराफ़ हूँ। मेरा नाम है जोजो। \xa0मैं तुम्हारे साथ खेल सकता हूँ। मेरी पीठ पर चढ़ जाओ, मैं तुम्हें घुमा के लाता हूँ।”',
'साबू जोजो की पीठ पर चढ़ गया और वे सड़क पर चल निकले। फिर पहाड़ी पर और शहर के बीचों बीच।\nसाबू खुशी से चिल्लाया, “जोजो दाएँ मुड़ो,\n बाएँ मुड़ो और फिर दाएँ।” अब वे उसकी दोस्त मुन्नी के घर पहुँच गये।',
'आज मुन्नी का जन्मदिन था। साबू को जोजो पर सवारी करते देख बच्चों ने ताली बजायी।\xa0\n जोजो ने गुब्बारे लटकाने में आन्टी की मदद करी क्योंकि वह इतना... लम्बा था।\xa0\n कितना आसान था!',
'जोजो ने सब बच्चों को सवारी कराई।\n उनके साथ बॉल भी खेली। बड़े मज़े की पार्टी थी।सब ने गाया, “हैप्पी बर्थ डे टु यू ।”\n आन्टी ने मेज़ पर समोसे, गुलाब जामुन और आइसक्रीम सजाई।',
'जोजो को आइसक्रीम बहुत पसन्द आई। अंकल उसके लिये एक बाल्टी भर के आइसक्रीम लाये। जोजो ने पूरी बाल्टी ख़त्म कर दी। \xa0अब घर जाने का समय हो गया।\n\nसब ने कहा, “बाय बाय जोजो, बाय बाय साबू।” साबू और जोजो घर लौटे।',
'']}
```
### Data Fields
The metadata fields below are available. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).
- **id**: id of the sample
- **title**: title of the book, e.g. "Going to Buy a Book".
- **license**: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike".
- **album_id**: an ID value corresponding to the set of images corresponding to the given story
- **story**: the sequenced story data including lists of image IDs, image URLs, and corresponding text
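Because `story` stores parallel lists, image-caption pairs for VIST-style use can be recovered by zipping the URL and text lists. The snippet below is only a minimal sketch based on the Hindi example shown above (the final text entry in that example is empty, hence the filter); it is not part of any official loader.

```python
from datasets import load_dataset

# Minimal sketch: turn one story into (image_url, caption) pairs.
# Field names follow the Hindi example shown above.
dataset = load_dataset("sil-ai/bloom-vist", 'hin')
story = dataset['train'][0]['story']

pairs = [
    (url, text)
    for url, text in zip(story['image_url'], story['text'])
    if text.strip()  # drop frames whose text is empty
]
print(len(pairs), "image-caption pairs")
print(pairs[0])
```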
### Data Splits
Currently all languages include a train split only. In the future, we will be creating manual splits of the data.
## Changelog
- **6 December 2022** - dataset is made public | sil-ai/bloom-vist | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:afr",
"language:af",
"language:aaa",
"language:abc",
"language:ada",
"language:adq",
"language:aeu",
"language:agq",
"language:ags",
"language:ahk",
"language:aia",
"language:ajz",
"language:aka",
"language:ak",
"language:ame",
"language:amh",
"language:am",
"language:amp",
"language:amu",
"language:ann",
"language:aph",
"language:awa",
"language:awb",
"language:azn",
"language:azo",
"language:bag",
"language:bam",
"language:bm",
"language:baw",
"language:bax",
"language:bbk",
"language:bcc",
"language:bce",
"language:bec",
"language:bef",
"language:ben",
"language:bn",
"language:bfd",
"language:bfm",
"language:bfn",
"language:bgf",
"language:bho",
"language:bhs",
"language:bis",
"language:bi",
"language:bjn",
"language:bjr",
"language:bkc",
"language:bkh",
"language:bkm",
"language:bkx",
"language:bob",
"language:bod",
"language:bo",
"language:boz",
"language:bqm",
"language:bra",
"language:brb",
"language:bri",
"language:brv",
"language:bss",
"language:bud",
"language:buo",
"language:bwt",
"language:bwx",
"language:bxa",
"language:bya",
"language:bze",
"language:bzi",
"language:cak",
"language:cbr",
"language:ceb",
"language:cgc",
"language:chd",
"language:chp",
"language:cim",
"language:clo",
"language:cmn",
"language:zh",
"language:cmo",
"language:csw",
"language:cuh",
"language:cuv",
"language:dag",
"language:ddg",
"language:ded",
"language:deu",
"language:de",
"language:dig",
"language:dje",
"language:dmg",
"language:dnw",
"language:dtp",
"language:dtr",
"language:dty",
"language:dug",
"language:eee",
"language:ekm",
"language:enb",
"language:enc",
"language:eng",
"language:en",
"language:ewo",
"language:fas",
"language:fa",
"language:fil",
"language:fli",
"language:fon",
"language:fra",
"language:fr",
"language:fub",
"language:fuh",
"language:gal",
"language:gbj",
"language:gou",
"language:gsw",
"language:guc",
"language:guj",
"language:gu",
"language:guz",
"language:gwc",
"language:hao",
"language:hat",
"language:ht",
"language:hau",
"language:ha",
"language:hbb",
"language:hig",
"language:hil",
"language:hin",
"language:hi",
"language:hla",
"language:hna",
"language:hre",
"language:hro",
"language:idt",
"language:ilo",
"language:ind",
"language:id",
"language:ino",
"language:isu",
"language:ita",
"language:it",
"language:jgo",
"language:jmx",
"language:jpn",
"language:ja",
"language:jra",
"language:kak",
"language:kam",
"language:kan",
"language:kn",
"language:kau",
"language:kr",
"language:kbq",
"language:kbx",
"language:kby",
"language:kek",
"language:ken",
"language:khb",
"language:khm",
"language:km",
"language:kik",
"language:ki",
"language:kin",
"language:rw",
"language:kir",
"language:ky",
"language:kjb",
"language:kmg",
"language:kmr",
"language:ku",
"language:kms",
"language:kmu",
"language:kor",
"language:ko",
"language:kqr",
"language:krr",
"language:ksw",
"language:kur",
"language:kvt",
"language:kwd",
"language:kwu",
"language:kwx",
"language:kxp",
"language:kyq",
"language:laj",
"language:lan",
"language:lao",
"language:lo",
"language:lbr",
"language:lfa",
"language:lgg",
"language:lgr",
"language:lhm",
"language:lhu",
"language:lkb",
"language:llg",
"language:lmp",
"language:lns",
"language:loh",
"language:lsi",
"language:lts",
"language:lug",
"language:lg",
"language:luy",
"language:lwl",
"language:mai",
"language:mal",
"language:ml",
"language:mam",
"language:mar",
"language:mr",
"language:mdr",
"language:mfh",
"language:mfj",
"language:mgg",
"language:mgm",
"language:mgo",
"language:mgq",
"language:mhx",
"language:miy",
"language:mkz",
"language:mle",
"language:mlk",
"language:mlw",
"language:mmu",
"language:mne",
"language:mnf",
"language:mnw",
"language:mot",
"language:mqj",
"language:mrn",
"language:mry",
"language:msb",
"language:muv",
"language:mve",
"language:mxu",
"language:mya",
"language:my",
"language:myk",
"language:myx",
"language:mzm",
"language:nas",
"language:nco",
"language:nep",
"language:ne",
"language:new",
"language:nge",
"language:ngn",
"language:nhx",
"language:njy",
"language:nla",
"language:nld",
"language:nl",
"language:nlv",
"language:nod",
"language:nsk",
"language:nsn",
"language:nso",
"language:nst",
"language:nuj",
"language:nwe",
"language:nwi",
"language:nxa",
"language:nxl",
"language:nya",
"language:ny",
"language:nyo",
"language:nyu",
"language:nza",
"language:odk",
"language:oji",
"language:oj",
"language:oki",
"language:omw",
"language:ori",
"language:or",
"language:ozm",
"language:pae",
"language:pag",
"language:pan",
"language:pa",
"language:pbt",
"language:pce",
"language:pcg",
"language:pdu",
"language:pea",
"language:pex",
"language:pis",
"language:pkb",
"language:pmf",
"language:pnz",
"language:por",
"language:pt",
"language:psp",
"language:pwg",
"language:qaa",
"language:qub",
"language:quc",
"language:quf",
"language:quz",
"language:qve",
"language:qvh",
"language:qvm",
"language:qvo",
"language:qxh",
"language:rel",
"language:rnl",
"language:ron",
"language:ro",
"language:roo",
"language:rue",
"language:rug",
"language:rus",
"language:ru",
"language:san",
"language:sa",
"language:saq",
"language:sat",
"language:sdk",
"language:sea",
"language:sgd",
"language:shn",
"language:sml",
"language:snk",
"language:snl",
"language:som",
"language:so",
"language:sot",
"language:st",
"language:sox",
"language:spa",
"language:es",
"language:sps",
"language:ssn",
"language:stk",
"language:swa",
"language:sw",
"language:swh",
"language:sxb",
"language:syw",
"language:taj",
"language:tam",
"language:ta",
"language:tbj",
"language:tdb",
"language:tdg",
"language:tdt",
"language:teo",
"language:tet",
"language:tgk",
"language:tg",
"language:tha",
"language:th",
"language:the",
"language:thk",
"language:thl",
"language:thy",
"language:tio",
"language:tkd",
"language:tnl",
"language:tnn",
"language:tnp",
"language:tnt",
"language:tod",
"language:tom",
"language:tpi",
"language:tpl",
"language:tpu",
"language:tsb",
"language:tsn",
"language:tn",
"language:tso",
"language:ts",
"language:tuv",
"language:tuz",
"language:tvs",
"language:udg",
"language:unr",
"language:urd",
"language:ur",
"language:uzb",
"language:uz",
"language:ven",
"language:ve",
"language:vie",
"language:vi",
"language:vif",
"language:war",
"language:wbm",
"language:wbr",
"language:wms",
"language:wni",
"language:wnk",
"language:wtk",
"language:xho",
"language:xh",
"language:xkg",
"language:xmd",
"language:xmg",
"language:xmm",
"language:xog",
"language:xty",
"language:yas",
"language:yav",
"language:ybb",
"language:ybh",
"language:ybi",
"language:ydd",
"language:yea",
"language:yet",
"language:yid",
"language:yi",
"language:yin",
"language:ymp",
"language:zaw",
"language:zho",
"language:zlm",
"language:zuh",
"language:zul",
"language:zu",
"license:cc-by-4.0",
"license:cc-by-nc-4.0",
"license:cc-by-nd-4.0",
"license:cc-by-sa-4.0",
"license:cc-by-nc-nd-4.0",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-05-05T15:25:54+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["afr", "af", "aaa", "abc", "ada", "adq", "aeu", "agq", "ags", "ahk", "aia", "ajz", "aka", "ak", "ame", "amh", "am", "amp", "amu", "ann", "aph", "awa", "awb", "azn", "azo", "bag", "bam", "bm", "baw", "bax", "bbk", "bcc", "bce", "bec", "bef", "ben", "bn", "bfd", "bfm", "bfn", "bgf", "bho", "bhs", "bis", "bi", "bjn", "bjr", "bkc", "bkh", "bkm", "bkx", "bob", "bod", "bo", "boz", "bqm", "bra", "brb", "bri", "brv", "bss", "bud", "buo", "bwt", "bwx", "bxa", "bya", "bze", "bzi", "cak", "cbr", "ceb", "cgc", "chd", "chp", "cim", "clo", "cmn", "zh", "cmo", "csw", "cuh", "cuv", "dag", "ddg", "ded", "deu", "de", "dig", "dje", "dmg", "dnw", "dtp", "dtr", "dty", "dug", "eee", "ekm", "enb", "enc", "eng", "en", "ewo", "fas", "fa", "fil", "fli", "fon", "fra", "fr", "fub", "fuh", "gal", "gbj", "gou", "gsw", "guc", "guj", "gu", "guz", "gwc", "hao", "hat", "ht", "hau", "ha", "hbb", "hig", "hil", "hin", "hi", "hla", "hna", "hre", "hro", "idt", "ilo", "ind", "id", "ino", "isu", "ita", "it", "jgo", "jmx", "jpn", "ja", "jra", "kak", "kam", "kan", "kn", "kau", "kr", "kbq", "kbx", "kby", "kek", "ken", "khb", "khm", "km", "kik", "ki", "kin", "rw", "kir", "ky", "kjb", "kmg", "kmr", "ku", "kms", "kmu", "kor", "ko", "kqr", "krr", "ksw", "kur", "ku", "kvt", "kwd", "kwu", "kwx", "kxp", "kyq", "laj", "lan", "lao", "lo", "lbr", "lfa", "lgg", "lgr", "lhm", "lhu", "lkb", "llg", "lmp", "lns", "loh", "lsi", "lts", "lug", "lg", "luy", "lwl", "mai", "mal", "ml", "mam", "mar", "mr", "mdr", "mfh", "mfj", "mgg", "mgm", "mgo", "mgq", "mhx", "miy", "mkz", "mle", "mlk", "mlw", "mmu", "mne", "mnf", "mnw", "mot", "mqj", "mrn", "mry", "msb", "muv", "mve", "mxu", "mya", "my", "myk", "myx", "mzm", "nas", "nco", "nep", "ne", "new", "nge", "ngn", "nhx", "njy", "nla", "nld", "nl", "nlv", "nod", "nsk", "nsn", "nso", "nst", "nuj", "nwe", "nwi", "nxa", "nxl", "nya", "ny", "nyo", "nyu", "nza", "odk", "oji", "oj", "oki", "omw", "ori", "or", "ozm", "pae", "pag", "pan", "pa", "pbt", "pce", "pcg", "pdu", "pea", "pex", "pis", "pkb", "pmf", "pnz", "por", "pt", "psp", "pwg", "qaa", "qub", "quc", "quf", "quz", "qve", "qvh", "qvm", "qvo", "qxh", "rel", "rnl", "ron", "ro", "roo", "rue", "rug", "rus", "ru", "san", "sa", "saq", "sat", "sdk", "sea", "sgd", "shn", "sml", "snk", "snl", "som", "so", "sot", "st", "sox", "spa", "es", "sps", "ssn", "stk", "swa", "sw", "swh", "sxb", "syw", "taj", "tam", "ta", "tbj", "tdb", "tdg", "tdt", "teo", "tet", "tgk", "tg", "tha", "th", "the", "thk", "thl", "thy", "tio", "tkd", "tnl", "tnn", "tnp", "tnt", "tod", "tom", "tpi", "tpl", "tpu", "tsb", "tsn", "tn", "tso", "ts", "tuv", "tuz", "tvs", "udg", "unr", "urd", "ur", "uzb", "uz", "ven", "ve", "vie", "vi", "vif", "war", "wbm", "wbr", "wms", "wni", "wnk", "wtk", "xho", "xh", "xkg", "xmd", "xmg", "xmm", "xog", "xty", "yas", "yav", "ybb", "ybh", "ybi", "ydd", "yea", "yet", "yid", "yi", "yin", "ymp", "zaw", "zho", "zh", "zlm", "zuh", "zul", "zu"], "license": ["cc-by-4.0", "cc-by-nc-4.0", "cc-by-nd-4.0", "cc-by-sa-4.0", "cc-by-nc-nd-4.0", "cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "pretty_name": "BloomVIST", "extra_gated_prompt": "One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled with a `cc-by-sa` license). 
A \"license\" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.\n\nThese [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that: \n\n1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license. If you would like to ask about commercial uses of this dataset, please [email us](mailto:[email protected]).\n2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. \n3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material. \n\nIn addition to the above implied by Creative Commons and when clicking \"Access Repository\" below, you agree: \n\n1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.\n2. That your *contact information* (email address and username) can be shared with the model authors as well.\n ", "extra_gated_fields": {"I have read the License and agree with its terms": "checkbox"}} | 2022-12-06T19:37:53+00:00 | [] | [
"afr",
"af",
"aaa",
"abc",
"ada",
"adq",
"aeu",
"agq",
"ags",
"ahk",
"aia",
"ajz",
"aka",
"ak",
"ame",
"amh",
"am",
"amp",
"amu",
"ann",
"aph",
"awa",
"awb",
"azn",
"azo",
"bag",
"bam",
"bm",
"baw",
"bax",
"bbk",
"bcc",
"bce",
"bec",
"bef",
"ben",
"bn",
"bfd",
"bfm",
"bfn",
"bgf",
"bho",
"bhs",
"bis",
"bi",
"bjn",
"bjr",
"bkc",
"bkh",
"bkm",
"bkx",
"bob",
"bod",
"bo",
"boz",
"bqm",
"bra",
"brb",
"bri",
"brv",
"bss",
"bud",
"buo",
"bwt",
"bwx",
"bxa",
"bya",
"bze",
"bzi",
"cak",
"cbr",
"ceb",
"cgc",
"chd",
"chp",
"cim",
"clo",
"cmn",
"zh",
"cmo",
"csw",
"cuh",
"cuv",
"dag",
"ddg",
"ded",
"deu",
"de",
"dig",
"dje",
"dmg",
"dnw",
"dtp",
"dtr",
"dty",
"dug",
"eee",
"ekm",
"enb",
"enc",
"eng",
"en",
"ewo",
"fas",
"fa",
"fil",
"fli",
"fon",
"fra",
"fr",
"fub",
"fuh",
"gal",
"gbj",
"gou",
"gsw",
"guc",
"guj",
"gu",
"guz",
"gwc",
"hao",
"hat",
"ht",
"hau",
"ha",
"hbb",
"hig",
"hil",
"hin",
"hi",
"hla",
"hna",
"hre",
"hro",
"idt",
"ilo",
"ind",
"id",
"ino",
"isu",
"ita",
"it",
"jgo",
"jmx",
"jpn",
"ja",
"jra",
"kak",
"kam",
"kan",
"kn",
"kau",
"kr",
"kbq",
"kbx",
"kby",
"kek",
"ken",
"khb",
"khm",
"km",
"kik",
"ki",
"kin",
"rw",
"kir",
"ky",
"kjb",
"kmg",
"kmr",
"ku",
"kms",
"kmu",
"kor",
"ko",
"kqr",
"krr",
"ksw",
"kur",
"kvt",
"kwd",
"kwu",
"kwx",
"kxp",
"kyq",
"laj",
"lan",
"lao",
"lo",
"lbr",
"lfa",
"lgg",
"lgr",
"lhm",
"lhu",
"lkb",
"llg",
"lmp",
"lns",
"loh",
"lsi",
"lts",
"lug",
"lg",
"luy",
"lwl",
"mai",
"mal",
"ml",
"mam",
"mar",
"mr",
"mdr",
"mfh",
"mfj",
"mgg",
"mgm",
"mgo",
"mgq",
"mhx",
"miy",
"mkz",
"mle",
"mlk",
"mlw",
"mmu",
"mne",
"mnf",
"mnw",
"mot",
"mqj",
"mrn",
"mry",
"msb",
"muv",
"mve",
"mxu",
"mya",
"my",
"myk",
"myx",
"mzm",
"nas",
"nco",
"nep",
"ne",
"new",
"nge",
"ngn",
"nhx",
"njy",
"nla",
"nld",
"nl",
"nlv",
"nod",
"nsk",
"nsn",
"nso",
"nst",
"nuj",
"nwe",
"nwi",
"nxa",
"nxl",
"nya",
"ny",
"nyo",
"nyu",
"nza",
"odk",
"oji",
"oj",
"oki",
"omw",
"ori",
"or",
"ozm",
"pae",
"pag",
"pan",
"pa",
"pbt",
"pce",
"pcg",
"pdu",
"pea",
"pex",
"pis",
"pkb",
"pmf",
"pnz",
"por",
"pt",
"psp",
"pwg",
"qaa",
"qub",
"quc",
"quf",
"quz",
"qve",
"qvh",
"qvm",
"qvo",
"qxh",
"rel",
"rnl",
"ron",
"ro",
"roo",
"rue",
"rug",
"rus",
"ru",
"san",
"sa",
"saq",
"sat",
"sdk",
"sea",
"sgd",
"shn",
"sml",
"snk",
"snl",
"som",
"so",
"sot",
"st",
"sox",
"spa",
"es",
"sps",
"ssn",
"stk",
"swa",
"sw",
"swh",
"sxb",
"syw",
"taj",
"tam",
"ta",
"tbj",
"tdb",
"tdg",
"tdt",
"teo",
"tet",
"tgk",
"tg",
"tha",
"th",
"the",
"thk",
"thl",
"thy",
"tio",
"tkd",
"tnl",
"tnn",
"tnp",
"tnt",
"tod",
"tom",
"tpi",
"tpl",
"tpu",
"tsb",
"tsn",
"tn",
"tso",
"ts",
"tuv",
"tuz",
"tvs",
"udg",
"unr",
"urd",
"ur",
"uzb",
"uz",
"ven",
"ve",
"vie",
"vi",
"vif",
"war",
"wbm",
"wbr",
"wms",
"wni",
"wnk",
"wtk",
"xho",
"xh",
"xkg",
"xmd",
"xmg",
"xmm",
"xog",
"xty",
"yas",
"yav",
"ybb",
"ybh",
"ybi",
"ydd",
"yea",
"yet",
"yid",
"yi",
"yin",
"ymp",
"zaw",
"zho",
"zlm",
"zuh",
"zul",
"zu"
] | TAGS
#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Afrikaans #language-Ghotuo #language-Ambala Ayta #language-Adangme #language-Adangbe #language-Akeu #language-Aghem #language-Esimbi #language-Akha #language-Arosi #language-Amri Karbi #language-Akan #language-Akan #language-Yanesha' #language-Amharic #language-Amharic #language-Alamblak #language-Guerrero Amuzgo #language-Obolo #language-Athpariya #language-Awadhi #language-Awa (Papua New Guinea) #language-Western Durango Nahuatl #language-Awing #language-Tuki #language-Bambara #language-Bambara #language-Bambili-Bambui #language-Bamun #language-Babanki #language-Southern Balochi #language-Bamenyam #language-Iceve-Maci #language-Benabena #language-Bengali #language-Bengali #language-Bafut #language-Mmen #language-Bunak #language-Bangandu #language-Bhojpuri #language-Buwal #language-Bislama #language-Bislama #language-Banjar #language-Binumarien #language-Baka (Cameroon) #language-Bakoko #language-Kom (Cameroon) #language-Baikeno #language-Aweer #language-Tibetan #language-Tibetan #language-Tiéyaxo Bozo #language-Wumboko #language-Braj #language-Brao #language-Mokpwe #language-Western Bru #language-Akoose #language-Ntcham #language-Terei #language-Bafaw-Balong #language-Bu-Nao Bunu #language-Tairaha #language-Batak #language-Jenaama Bozo #language-Bisu #language-Kaqchikel #language-Cashibo-Cacataibo #language-Cebuano #language-Kagayanen #language-Highland Oaxaca Chontal #language-Chipewyan #language-Cimbrian #language-Lowland Oaxaca Chontal #language-Mandarin Chinese #language-Chinese #language-Central Mnong #language-Swampy Cree #language-Chuka #language-Cuvok #language-Dagbani #language-Fataluku #language-Dedua #language-German #language-German #language-Digo #language-Zarma #language-Upper Kinabatangan #language-Western Dani #language-Kadazan Dusun #language-Lotud #language-Dotyali #language-Duruma #language-E #language-Elip #language-Markweeta #language-En #language-English #language-English #language-Ewondo #language-Persian #language-Persian #language-Filipino #language-Fali #language-Fon #language-French #language-French #language-Adamawa Fulfulde #language-Western Niger Fulfulde #language-Galolen #language-Bodo Gadaba #language-Gavar #language-Swiss German #language-Wayuu #language-Gujarati #language-Gujarati #language-Gusii #language-Gawri #language-Hakö #language-Haitian #language-Haitian #language-Hausa #language-Hausa #language-Huba #language-Kamwe #language-Hiligaynon #language-Hindi #language-Hindi #language-Halia #language-Mina (Cameroon) #language-Hre #language-Haroi #language-Idaté #language-Iloko #language-Indonesian #language-Indonesian #language-Inoke-Yate #language-Isu (Menchum Division) #language-Italian #language-Italian #language-Ngomba #language-Western Juxtlahuaca Mixtec #language-Japanese #language-Japanese #language-Jarai #language-Kalanguya #language-Kamba (Kenya) #language-Kannada #language-Kannada #language-Kanuri #language-Kanuri #language-Kamano #language-Ap Ma #language-Manga Kanuri #language-Kekchí #language-Kenyang #language-Lü #language-Khmer #language-Khmer #language-Kikuyu #language-Kikuyu #language-Kinyarwanda #language-Kinyarwanda #language-Kirghiz #language-Kirghiz #language-Q'anjob'al #language-Kâte #language-Northern Kurdish #language-Kurdish #language-Kamasau #language-Kanite #language-Korean #language-Korean #language-Kimaragang #language-Krung 
#language-S'gaw Karen #language-Kurdish #language-Lahta Karen #language-Kwaio #language-Kwakum #language-Khirwar #language-Wadiyara Koli #language-Kenga #language-Lango (Uganda) #language-Laru #language-Lao #language-Lao #language-Lohorung #language-Lefa #language-Lugbara #language-Lengo #language-Lhomi #language-Lahu #language-Kabras #language-Lole #language-Limbum #language-Lamnso' #language-Laarim #language-Lashi #language-Tachoni #language-Ganda #language-Ganda #language-Luyia #language-Eastern Lawa #language-Maithili #language-Malayalam #language-Malayalam #language-Mam #language-Marathi #language-Marathi #language-Mandar #language-Matal #language-Mefele #language-Mpumpong #language-Mambae #language-Meta' #language-Malila #language-Maru #language-Ayutla Mixtec #language-Makasae #language-Manambu #language-Ilwana #language-Moloko #language-Mmaala #language-Naba #language-Mundani #language-Mon #language-Barí #language-Mamasa #language-Cheke Holo #language-Mandaya #language-Masbatenyo #language-Muthuvan #language-Marwari (Pakistan) #language-Mada (Cameroon) #language-Burmese #language-Burmese #language-Mamara Senoufo #language-Masaaba #language-Mumuye #language-Naasioi #language-Sibe #language-Nepali (macrolanguage) #language-Nepali (macrolanguage) #language-Newari #language-Ngemba #language-Ngwo #language-Isthmus-Mecayapan Nahuatl #language-Njyem #language-Ngombale #language-Dutch #language-Dutch #language-Orizaba Nahuatl #language-Northern Thai #language-Naskapi #language-Nehan #language-Pedi #language-Tase Naga #language-Nyole #language-Ngwe #language-Southwest Tanna #language-Nauete #language-South Nuaulu #language-Nyanja #language-Nyanja #language-Nyoro #language-Nyungwe #language-Tigon Mbembe #language-Od #language-Ojibwa #language-Ojibwa #language-Okiek #language-South Tairora #language-Oriya (macrolanguage) #language-Oriya (macrolanguage) #language-Koonzime #language-Pagibete #language-Pangasinan #language-Panjabi #language-Panjabi #language-Southern Pashto #language-Ruching Palaung #language-Paniya #language-Kayan #language-Peranakan Indonesian #language-Petats #language-Pijin #language-Pokomo #language-Pamona #language-Pana (Central African Republic) #language-Portuguese #language-Portuguese #language-Philippine Sign Language #language-Gapapaiwa #language-qaa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-Cusco Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-Napo Lowland Quechua #language-Panao Huánuco Quechua #language-Rendille #language-Ranglong #language-Romanian #language-Romanian #language-Rotokas #language-Rusyn #language-Roviana #language-Russian #language-Russian #language-Sanskrit #language-Sanskrit #language-Samburu #language-Santali #language-Sos Kundi #language-Semai #language-Surigaonon #language-Shan #language-Central Sama #language-Soninke #language-Sangil #language-Somali #language-Somali #language-Southern Sotho #language-Southern Sotho #language-Swo #language-Spanish #language-Spanish #language-Saposa #language-Waata #language-Arammba #language-Swahili (macrolanguage) #language-Swahili (macrolanguage) #language-Swahili (individual language) #language-Suba #language-Kagate #language-Eastern Tamang #language-Tamil #language-Tamil #language-Tiang #language-Panchpargania #language-Western Tamang #language-Tetun Dili #language-Teso #language-Tetum #language-Tajik #language-Tajik #language-Thai #language-Thai 
#language-Chitwania Tharu #language-Tharaka #language-Dangaura Tharu #language-Tha #language-Teop #language-Tukudede #language-Lenakel #language-North Tanna #language-Whitesands #language-Tontemboan #language-Toma #language-Tombulu #language-Tok Pisin #language-Tlacoapa Me'phaa #language-Tampuan #language-Tsamai #language-Tswana #language-Tswana #language-Tsonga #language-Tsonga #language-Turkana #language-Turka #language-Taveta #language-Muduga #language-Mundari #language-Urdu #language-Urdu #language-Uzbek #language-Uzbek #language-Venda #language-Venda #language-Vietnamese #language-Vietnamese #language-Vili #language-Waray (Philippines) #language-Wa #language-Wagdi #language-Wambon #language-Ndzwani Comorian #language-Wanukaka #language-Watakataui #language-Xhosa #language-Xhosa #language-Kagoro #language-Mbudum #language-Mengaka #language-Manado Malay #language-Soga #language-Yoloxochitl Mixtec #language-Nugunu (Cameroon) #language-Yangben #language-Yemba #language-Yakha #language-Yamphu #language-Eastern Yiddish #language-Ravula #language-Yetfa #language-Yiddish #language-Yiddish #language-Riang Lai #language-Yamap #language-Mitla Zapotec #language-Chinese #language-Malay (individual language) #language-Tokano #language-Zulu #language-Zulu #license-cc-by-4.0 #license-cc-by-nc-4.0 #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-cc-by-nc-nd-4.0 #license-cc-by-nc-sa-4.0 #region-us
| Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Description
-------------------
* Homepage: SIL AI
* Point of Contact: SIL AI email
* Source Data: Bloom Library
!logo for Bloom Library !sil-ai logo
Dataset Summary
---------------
Bloom is free, open-source software and an associated website Bloom Library, app, and services developed by SIL International. Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the visual story telling (or VIST) task. It includes data from 364 languages across 31 language families. There is a mean of 32 stories and median of 2 stories per language.
Note: If you speak one of these languages and can help provide feedback or corrections, please let us know!
Note: Although this data was used in the training of the BLOOM model, this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool.
Languages
---------
Of the 500+ languages listed at URL, there are 363 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul
Dataset Statistics
------------------
Some of the languages included in the dataset just include 1 or a couple of "stories." For those with higher numbers of available stories we include the following numbers of stories:
Dataset Structure
-----------------
### Data Instances
The examples look like this for Hindi:
This would produce an output:
### Data Fields
The metadata fields below are available. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).
* id: id of the sample
* title: title of the book, e.g. "Going to Buy a Book".
* license: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike".
* album\_id: an ID value corresponding to the set of images corresponding to the given story
* story: the sequenced story data including lists of image IDs, image URLs, and corresponding text
### Data Splits
Currently all languages include a train split only. In the future, we will be creating manual splits of the data.
Changelog
---------
* 6 December 2022 - dataset is made public
| [
"### Data Instances\n\n\nThe examples look like this for Hindi:\n\n\nThis would produce an output:",
"### Data Fields\n\n\nThe metadata fields below are available. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).\n\n\n* id: id of the sample\n* title: title of the book, e.g. \"Going to Buy a Book\".\n* license: specific license used, e.g. \"cc-by-sa\" for \"Creative Commons, by attribution, share-alike\".\n* album\\_id: an ID value corresponding to the set of images corresponding to the given story\n* story: the sequenced story data including lists of image IDs, image URLs, and corresponding text",
"### Data Splits\n\n\nCurrently all languages include a train split only. In the future, we will be creating manual splits of the data.\n\n\nChangelog\n---------\n\n\n* 6 December 2022 - dataset is made public"
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Afrikaans #language-Ghotuo #language-Ambala Ayta #language-Adangme #language-Adangbe #language-Akeu #language-Aghem #language-Esimbi #language-Akha #language-Arosi #language-Amri Karbi #language-Akan #language-Akan #language-Yanesha' #language-Amharic #language-Amharic #language-Alamblak #language-Guerrero Amuzgo #language-Obolo #language-Athpariya #language-Awadhi #language-Awa (Papua New Guinea) #language-Western Durango Nahuatl #language-Awing #language-Tuki #language-Bambara #language-Bambara #language-Bambili-Bambui #language-Bamun #language-Babanki #language-Southern Balochi #language-Bamenyam #language-Iceve-Maci #language-Benabena #language-Bengali #language-Bengali #language-Bafut #language-Mmen #language-Bunak #language-Bangandu #language-Bhojpuri #language-Buwal #language-Bislama #language-Bislama #language-Banjar #language-Binumarien #language-Baka (Cameroon) #language-Bakoko #language-Kom (Cameroon) #language-Baikeno #language-Aweer #language-Tibetan #language-Tibetan #language-Tiéyaxo Bozo #language-Wumboko #language-Braj #language-Brao #language-Mokpwe #language-Western Bru #language-Akoose #language-Ntcham #language-Terei #language-Bafaw-Balong #language-Bu-Nao Bunu #language-Tairaha #language-Batak #language-Jenaama Bozo #language-Bisu #language-Kaqchikel #language-Cashibo-Cacataibo #language-Cebuano #language-Kagayanen #language-Highland Oaxaca Chontal #language-Chipewyan #language-Cimbrian #language-Lowland Oaxaca Chontal #language-Mandarin Chinese #language-Chinese #language-Central Mnong #language-Swampy Cree #language-Chuka #language-Cuvok #language-Dagbani #language-Fataluku #language-Dedua #language-German #language-German #language-Digo #language-Zarma #language-Upper Kinabatangan #language-Western Dani #language-Kadazan Dusun #language-Lotud #language-Dotyali #language-Duruma #language-E #language-Elip #language-Markweeta #language-En #language-English #language-English #language-Ewondo #language-Persian #language-Persian #language-Filipino #language-Fali #language-Fon #language-French #language-French #language-Adamawa Fulfulde #language-Western Niger Fulfulde #language-Galolen #language-Bodo Gadaba #language-Gavar #language-Swiss German #language-Wayuu #language-Gujarati #language-Gujarati #language-Gusii #language-Gawri #language-Hakö #language-Haitian #language-Haitian #language-Hausa #language-Hausa #language-Huba #language-Kamwe #language-Hiligaynon #language-Hindi #language-Hindi #language-Halia #language-Mina (Cameroon) #language-Hre #language-Haroi #language-Idaté #language-Iloko #language-Indonesian #language-Indonesian #language-Inoke-Yate #language-Isu (Menchum Division) #language-Italian #language-Italian #language-Ngomba #language-Western Juxtlahuaca Mixtec #language-Japanese #language-Japanese #language-Jarai #language-Kalanguya #language-Kamba (Kenya) #language-Kannada #language-Kannada #language-Kanuri #language-Kanuri #language-Kamano #language-Ap Ma #language-Manga Kanuri #language-Kekchí #language-Kenyang #language-Lü #language-Khmer #language-Khmer #language-Kikuyu #language-Kikuyu #language-Kinyarwanda #language-Kinyarwanda #language-Kirghiz #language-Kirghiz #language-Q'anjob'al #language-Kâte #language-Northern Kurdish #language-Kurdish #language-Kamasau #language-Kanite #language-Korean #language-Korean #language-Kimaragang #language-Krung 
#language-S'gaw Karen #language-Kurdish #language-Lahta Karen #language-Kwaio #language-Kwakum #language-Khirwar #language-Wadiyara Koli #language-Kenga #language-Lango (Uganda) #language-Laru #language-Lao #language-Lao #language-Lohorung #language-Lefa #language-Lugbara #language-Lengo #language-Lhomi #language-Lahu #language-Kabras #language-Lole #language-Limbum #language-Lamnso' #language-Laarim #language-Lashi #language-Tachoni #language-Ganda #language-Ganda #language-Luyia #language-Eastern Lawa #language-Maithili #language-Malayalam #language-Malayalam #language-Mam #language-Marathi #language-Marathi #language-Mandar #language-Matal #language-Mefele #language-Mpumpong #language-Mambae #language-Meta' #language-Malila #language-Maru #language-Ayutla Mixtec #language-Makasae #language-Manambu #language-Ilwana #language-Moloko #language-Mmaala #language-Naba #language-Mundani #language-Mon #language-Barí #language-Mamasa #language-Cheke Holo #language-Mandaya #language-Masbatenyo #language-Muthuvan #language-Marwari (Pakistan) #language-Mada (Cameroon) #language-Burmese #language-Burmese #language-Mamara Senoufo #language-Masaaba #language-Mumuye #language-Naasioi #language-Sibe #language-Nepali (macrolanguage) #language-Nepali (macrolanguage) #language-Newari #language-Ngemba #language-Ngwo #language-Isthmus-Mecayapan Nahuatl #language-Njyem #language-Ngombale #language-Dutch #language-Dutch #language-Orizaba Nahuatl #language-Northern Thai #language-Naskapi #language-Nehan #language-Pedi #language-Tase Naga #language-Nyole #language-Ngwe #language-Southwest Tanna #language-Nauete #language-South Nuaulu #language-Nyanja #language-Nyanja #language-Nyoro #language-Nyungwe #language-Tigon Mbembe #language-Od #language-Ojibwa #language-Ojibwa #language-Okiek #language-South Tairora #language-Oriya (macrolanguage) #language-Oriya (macrolanguage) #language-Koonzime #language-Pagibete #language-Pangasinan #language-Panjabi #language-Panjabi #language-Southern Pashto #language-Ruching Palaung #language-Paniya #language-Kayan #language-Peranakan Indonesian #language-Petats #language-Pijin #language-Pokomo #language-Pamona #language-Pana (Central African Republic) #language-Portuguese #language-Portuguese #language-Philippine Sign Language #language-Gapapaiwa #language-qaa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-Cusco Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-Napo Lowland Quechua #language-Panao Huánuco Quechua #language-Rendille #language-Ranglong #language-Romanian #language-Romanian #language-Rotokas #language-Rusyn #language-Roviana #language-Russian #language-Russian #language-Sanskrit #language-Sanskrit #language-Samburu #language-Santali #language-Sos Kundi #language-Semai #language-Surigaonon #language-Shan #language-Central Sama #language-Soninke #language-Sangil #language-Somali #language-Somali #language-Southern Sotho #language-Southern Sotho #language-Swo #language-Spanish #language-Spanish #language-Saposa #language-Waata #language-Arammba #language-Swahili (macrolanguage) #language-Swahili (macrolanguage) #language-Swahili (individual language) #language-Suba #language-Kagate #language-Eastern Tamang #language-Tamil #language-Tamil #language-Tiang #language-Panchpargania #language-Western Tamang #language-Tetun Dili #language-Teso #language-Tetum #language-Tajik #language-Tajik #language-Thai #language-Thai 
#language-Chitwania Tharu #language-Tharaka #language-Dangaura Tharu #language-Tha #language-Teop #language-Tukudede #language-Lenakel #language-North Tanna #language-Whitesands #language-Tontemboan #language-Toma #language-Tombulu #language-Tok Pisin #language-Tlacoapa Me'phaa #language-Tampuan #language-Tsamai #language-Tswana #language-Tswana #language-Tsonga #language-Tsonga #language-Turkana #language-Turka #language-Taveta #language-Muduga #language-Mundari #language-Urdu #language-Urdu #language-Uzbek #language-Uzbek #language-Venda #language-Venda #language-Vietnamese #language-Vietnamese #language-Vili #language-Waray (Philippines) #language-Wa #language-Wagdi #language-Wambon #language-Ndzwani Comorian #language-Wanukaka #language-Watakataui #language-Xhosa #language-Xhosa #language-Kagoro #language-Mbudum #language-Mengaka #language-Manado Malay #language-Soga #language-Yoloxochitl Mixtec #language-Nugunu (Cameroon) #language-Yangben #language-Yemba #language-Yakha #language-Yamphu #language-Eastern Yiddish #language-Ravula #language-Yetfa #language-Yiddish #language-Yiddish #language-Riang Lai #language-Yamap #language-Mitla Zapotec #language-Chinese #language-Malay (individual language) #language-Tokano #language-Zulu #language-Zulu #license-cc-by-4.0 #license-cc-by-nc-4.0 #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-cc-by-nc-nd-4.0 #license-cc-by-nc-sa-4.0 #region-us \n",
"### Data Instances\n\n\nThe examples look like this for Hindi:\n\n\nThis would produce an output:",
"### Data Fields\n\n\nThe metadata fields below are available. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).\n\n\n* id: id of the sample\n* title: title of the book, e.g. \"Going to Buy a Book\".\n* license: specific license used, e.g. \"cc-by-sa\" for \"Creative Commons, by attribution, share-alike\".\n* album\\_id: an ID value corresponding to the set of images corresponding to the given story\n* story: the sequenced story data including lists of image IDs, image URLs, and corresponding text",
"### Data Splits\n\n\nCurrently all languages include a train split only. In the future, we will be creating manual splits of the data.\n\n\nChangelog\n---------\n\n\n* 6 December 2022 - dataset is made public"
] |
aa413c82b227dd25308df571e8b9d26e034cf2f7 | # Gcode (Geometric code)
## Details
**Usage:** 3D printing <br>
**Source:** Printables.com <br>
**Slicer:** Prusa <br>
**Category:** Art & Design <br>
**Subcategory:** Sculptures <br>
**Models:** 400 <br>
**Sliced files:** 740 (some models have more than one sliced file) <br>
**Data format:** txt <br>
**Train-test split:** 90/10 <br>
**Size:** 11GB
| ablam/gcode | [
"region:us"
] | 2022-05-05T16:27:08+00:00 | {} | 2022-05-05T18:14:30+00:00 | [] | [] | TAGS
#region-us
| # Gcode (Geometric code)
## Details
Usage: 3D printing <br>
Source: URL <br>
Slicer: Prusa <br>
Category: Art & Design <br>
Subcategory: Sculptures <br>
Models: 400 <br>
Sliced files: 740 (some models have many) <br>
Data format: txt <br>
Train-test split: 90/10 <br>
Size: 11GB
| [
"# Gcode (Geometric code)",
"## Details\n\nUsage: 3D printing <br>\nSource: URL <br>\nSlicer: Prusa <br>\nCategory: Art & Design <br>\nSubcategory: Sculptures <br>\nModels: 400 <br>\nSliced files: 740 (some models have many) <br>\nData format: txt <br>\nTrain-test split: 90/10 <br>\nSize: 11GB"
] | [
"TAGS\n#region-us \n",
"# Gcode (Geometric code)",
"## Details\n\nUsage: 3D printing <br>\nSource: URL <br>\nSlicer: Prusa <br>\nCategory: Art & Design <br>\nSubcategory: Sculptures <br>\nModels: 400 <br>\nSliced files: 740 (some models have many) <br>\nData format: txt <br>\nTrain-test split: 90/10 <br>\nSize: 11GB"
] |
cdb6315d2d523ee509cb915eb37a2ac1f807da2f | # Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a json dictionary).
The datasets are formatted similarly to the CONLL2003 format. Each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format)
The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to the ner_tags, as given by the dictionary in ner_tags.json
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
## Instructions for Workshop participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/WIESP2022-NER")
```
How to load the data if you cloned the repository locally:
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
wiesp_dev_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
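As a quick sanity check on the label space, you can collect the distinct IOB2 tags that occur in the training file. This is only an illustrative sketch building on the local-loading snippet above; it assumes `WIESP2022-NER-TRAINING.jsonl` is in the current directory.

```python
import json

# Gather every distinct NER tag used in the training data.
with open("./WIESP2022-NER-TRAINING.jsonl", 'r') as f:
    train = [json.loads(l) for l in list(f)]

tags = sorted({tag for example in train for tag in example["ner_tags"]})
print(len(tags), "distinct tags; first few:", tags[:5])
```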
How to compute your scores on the training data:
1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
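
As a concrete (if trivial) illustration of both steps, the sketch below writes a predictions file that simply copies the gold tags of the training data and then scores it. The import paths and the argument order of `compute_seqeval()` and `compute_MCC()` below are assumptions (the functions live in `scoring-scripts/compute_seqeval.py` and `scoring-scripts/compute_MCC.py`, and the calls assume references first, predictions second); check those scripts for the authoritative interface.

```python
import json
from datasets import Dataset

# Assumed import paths: adjust to wherever the scoring scripts live.
from compute_seqeval import compute_seqeval
from compute_MCC import compute_MCC

references = Dataset.from_json(path_or_paths="./WIESP2022-NER-TRAINING.jsonl")

# Step 1: each prediction keeps "unique_id" and "tokens" and adds "pred_ner_tags".
# Copying the gold tags over is only a formatting sanity check, not a real system.
with open("my-predictions.jsonl", 'w') as f:
    for ex in references:
        record = {
            "unique_id": ex["unique_id"],
            "tokens": ex["tokens"],
            "pred_ner_tags": ex["ner_tags"],
        }
        f.write(json.dumps(record) + "\n")

# Step 2: score the predictions against the references.
predictions = Dataset.from_json(path_or_paths="./my-predictions.jsonl")
print(compute_seqeval(references, predictions))
print(compute_MCC(references, predictions))
```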
Requirements to run the scoring scripts:
- [NumPy](https://numpy.org/install/)
- [scikit-learn](https://scikit-learn.org/stable/install.html)
- [seqeval](https://github.com/chakki-works/seqeval#installation)
To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [CodaLab](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
## File list
```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation
├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing
├── README.MD : this file.
├── tag_definitions.md : short descriptions and examples of the tags used in the task.
└── scoring-scripts/ : scripts used to evaluate submissions.
├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
└── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```
## Cite as
[Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL)](https://aclanthology.org/2022.wiesp-1.1) (Grezes et al., WIESP 2022)
```bibtex
@inproceedings{grezes-etal-2022-overview,
title = "Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})",
author = "Grezes, Felix and
Blanco-Cuaresma, Sergi and
Allen, Thomas and
Ghosal, Tirthankar",
booktitle = "Proceedings of the first Workshop on Information Extraction from Scientific Publications",
month = "nov",
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wiesp-1.1",
pages = "1--7",
abstract = "In this article, we describe the overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) in AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality meta-data for indexing and search, and several other use-cases of interests. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their system and performance and finally discuss the findings of DEAL.",
}
``` | adsabs/WIESP2022-NER | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-05-05T17:31:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "WIESP2022-NER"} | 2023-05-17T18:42:32+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #region-us
| # Dataset for the first <a href="URL style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the NASA Astrophysical Data System with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a json dictionary).
The datasets are formatted similarly to the CONLL2003 format. Each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the IOB2 syntax")
Each entry consists of a dictionary with the following keys:
- '"unique_id"': a unique identifier for this data sample. Must be included in the predictions.
- '"tokens"': the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- '"ner_tags"': the list of NER tags (in IOB2 format)
The following keys are not strictly needed by the participants:
- '"ner_ids"': the pre-computed list of ids corresponding ner_tags, as given by the dictionary in ner_tags.json
- '"label_studio_id"', '"section"', '"bibcode"': references for internal NASA/ADS use.
## Instructions for Workshop participants:
How to load the data using the Huggingface library:
How to load the data if you cloned the repository locally:
(assuming './URL' is in the current directory, change as needed)
- python (as list of dictionaries):
- into Huggingface (as a Huggingface Dataset):
How to compute your scores on the training data:
1. format your predictions as a list of dictionaries, each with the same '"unique_id"' and '"tokens"' keys from the dataset, as well as the list of predicted NER tags under the '"pred_ner_tags"' key (see 'URL' for an example).
2. pass the references and predictions datasets to the 'compute_MCC()' and 'compute_seqeval()' functions (from the '.py' files with the same names).
Requirement to run the scoring scripts:
NumPy
scikit-learn
seqeval
To get scores on the validation data, zip your predictions file (a single '.jsonl' file formatted following the same instructions as above) and upload the '.zip' file to the Codalabs competition.
## File list
## Cite as
Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL) (Grezes et al., WIESP 2022)
| [
"# Dataset for the first <a href=\"URL style=\"color:blue\">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.",
"## Dataset Description\nDatasets with text fragments from astrophysics papers, provided by the NASA Astrophysical Data System with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects). \nDatasets are in JSON Lines format (each line is a json dictionary). \nThe datasets are formatted similarly to the CONLL2003 format. Each token is associated with an NER tag. The tags follow the \"B-\" and \"I-\" convention from the IOB2 syntax\")\n\nEach entry consists of a dictionary with the following keys:\n- '\"unique_id\"': a unique identifier for this data sample. Must be included in the predictions.\n- '\"tokens\"': the list of tokens (strings) that form the text of this sample. Must be included in the predictions.\n- '\"ner_tags\"': the list of NER tags (in IOB2 format)\n\nThe following keys are not strictly needed by the participants:\n- '\"ner_ids\"': the pre-computed list of ids corresponding ner_tags, as given by the dictionary in ner_tags.json\n- '\"label_studio_id\"', '\"section\"', '\"bibcode\"': references for internal NASA/ADS use.",
"## Instructions for Workshop participants:\nHow to load the data using the Huggingface library:\n \n\nHow to load the data if you cloned the repository locally: \n(assuming './URL' is in the current directory, change as needed)\n- python (as list of dictionaries):\n\n - into Huggingface (as a Huggingface Dataset):\n\n\n\nHow to compute your scores on the training data:\n1. format your predictions as a list of dictionaries, each with the same '\"unique_id\"' and '\"tokens\"' keys from the dataset, as well as the list of predicted NER tags under the '\"pred_ner_tags\"' key (see 'URL' for an example).\n2. pass the references and predictions datasets to the 'compute_MCC()' and 'compute_seqeval()' functions (from the '.py' files with the same names).\n\nRequirement to run the scoring scripts: \nNumPy \nscikit-learn \nseqeval\n\nTo get scores on the validation data, zip your predictions file (a single '.jsonl' file formatted following the same instructions as above) and upload the '.zip' file to the Codalabs competition.",
"## File list",
"## Cite as\nOverview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL) (Grezes et al., WIESP 2022)"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset for the first <a href=\"URL style=\"color:blue\">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.",
"## Dataset Description\nDatasets with text fragments from astrophysics papers, provided by the NASA Astrophysical Data System with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects). \nDatasets are in JSON Lines format (each line is a json dictionary). \nThe datasets are formatted similarly to the CONLL2003 format. Each token is associated with an NER tag. The tags follow the \"B-\" and \"I-\" convention from the IOB2 syntax\")\n\nEach entry consists of a dictionary with the following keys:\n- '\"unique_id\"': a unique identifier for this data sample. Must be included in the predictions.\n- '\"tokens\"': the list of tokens (strings) that form the text of this sample. Must be included in the predictions.\n- '\"ner_tags\"': the list of NER tags (in IOB2 format)\n\nThe following keys are not strictly needed by the participants:\n- '\"ner_ids\"': the pre-computed list of ids corresponding ner_tags, as given by the dictionary in ner_tags.json\n- '\"label_studio_id\"', '\"section\"', '\"bibcode\"': references for internal NASA/ADS use.",
"## Instructions for Workshop participants:\nHow to load the data using the Huggingface library:\n \n\nHow to load the data if you cloned the repository locally: \n(assuming './URL' is in the current directory, change as needed)\n- python (as list of dictionaries):\n\n - into Huggingface (as a Huggingface Dataset):\n\n\n\nHow to compute your scores on the training data:\n1. format your predictions as a list of dictionaries, each with the same '\"unique_id\"' and '\"tokens\"' keys from the dataset, as well as the list of predicted NER tags under the '\"pred_ner_tags\"' key (see 'URL' for an example).\n2. pass the references and predictions datasets to the 'compute_MCC()' and 'compute_seqeval()' functions (from the '.py' files with the same names).\n\nRequirement to run the scoring scripts: \nNumPy \nscikit-learn \nseqeval\n\nTo get scores on the validation data, zip your predictions file (a single '.jsonl' file formatted following the same instructions as above) and upload the '.zip' file to the Codalabs competition.",
"## File list",
"## Cite as\nOverview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL) (Grezes et al., WIESP 2022)"
] |
b7b7a73dbc74b681b384048716232b243ad25a99 |
# Dataset Card for ImageCoDe
To get started quickly, load descriptions via:
```
from datasets import load_dataset
examples = load_dataset('BennoKrojer/ImageCoDe')
```
And download `image_sets.zip` for all image sets (each directory consisting of 10 images).
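As a quick sanity check, the snippet below pairs one loaded description with its unzipped image set. This is only a sketch, not an official loader: the unzip location, the `train` split name, and the glob pattern are assumptions, and the field names follow the instance format shown under Data Instances below.

```
from pathlib import Path
from datasets import load_dataset

examples = load_dataset('BennoKrojer/ImageCoDe')
image_root = Path('image_sets')  # wherever image_sets.zip was unzipped (assumed path)

first = examples['train'][0]  # assumes the split is exposed as 'train'
candidates = sorted((image_root / first['image_set']).glob('*'))  # the 10 candidate images for this description
print(first['description'])
print(len(candidates), 'candidate images in', image_root / first['image_set'])
```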
## Dataset Description
- **Homepage & Leaderboard:** https://mcgill-nlp.github.io/imagecode/
- **Repository:** https://github.com/McGill-NLP/imagecode
- **Paper:** https://arxiv.org/abs/2203.15867
- **Point of Contact:** benno DOT krojer ÄT gmail DOT com
### Dataset Summary
We introduce ImageCoDe, a vision-and-language benchmark that requires contextual language understanding in the form of pragmatics, temporality, long descriptions and visual nuances. The task: Given a detailed description, retrieve the target image among 10 minimally contrastive images. ImageCoDe contains 21K descriptions and 94K images. The images are primarily frames based on video datasets.
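To make the task concrete, here is a minimal sketch of scoring one description against the 10 images of a set, using CLIP purely as an illustrative zero-shot scorer (this is not implied to be a model or result from the paper); the local image paths and the example description are placeholders.

```
from pathlib import Path
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# One unzipped image set (10 images) and a description; both are placeholder values.
candidate_paths = sorted(Path("image_sets/example_set").glob("*"))
description = "The flowers the woman in the teal strapless dress is carrying are completely obscured."

images = [Image.open(p) for p in candidate_paths]
inputs = processor(text=[description], images=images, return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_image.squeeze(-1)  # one similarity score per candidate image
predicted_index = int(scores.argmax())
print("predicted target index:", predicted_index)
```

Retrieval accuracy is then simply the fraction of descriptions whose predicted index matches the annotated target index.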
## Dataset Structure
### Data Instances
An instance contains a description, the corresponding image set name, and the target index:
```
{"image_set": "video-storytelling-videowedding_de8dLXvgV-I-shot6_0",
"image_index": "8",
"description": "The flowers the woman in the teal strapless dress is carrying are completely obscured by the man in the black shirt's head. "}
```
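For illustration, such an instance could be resolved to its candidate images and target image on disk as follows, assuming `image_sets.zip` was unzipped to `./image_sets` and that the 10 files in each directory sort into index order (an assumption about the file naming, not something guaranteed here).

```
from pathlib import Path

instance = {"image_set": "video-storytelling-videowedding_de8dLXvgV-I-shot6_0",
            "image_index": "8",
            "description": "The flowers the woman in the teal strapless dress is carrying are completely obscured by the man in the black shirt's head. "}

candidates = sorted(Path("image_sets", instance["image_set"]).iterdir())
target = candidates[int(instance["image_index"])]  # image_index is stored as a string
print("target image:", target)
```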
### Data Splits
| Dataset Split | Number of Descriptions in Split |
| ------------- |----------------------------- |
| Train | 16,594 |
| Validation | 2,302 |
| Test | 2,306 |
## Dataset Creation
### Curation Rationale
The main goal of ImageCoDe is to highlight weaknesses of recent Vision-and-Language models regarding complex language and fine-grained visual representations. In addition, we found that the dataset offers plenty of pragmatic examples and is therefore suitable for studying pragmatics. | BennoKrojer/ImageCoDe | [
"license:afl-3.0",
"arxiv:2203.15867",
"region:us"
] | 2022-05-05T20:50:13+00:00 | {"license": "afl-3.0"} | 2022-05-13T20:26:08+00:00 | [
"2203.15867"
] | [] | TAGS
#license-afl-3.0 #arxiv-2203.15867 #region-us
| Dataset Card for ImageCoDe
==========================
To get started quickly, load descriptions via:
And download 'image\_sets.zip' for all image sets (each directory consisting of 10 images).
Dataset Description
-------------------
* Homepage & Leaderboard: URL
* Repository: URL
* Paper: URL
* Point of Contact: benno DOT krojer ÄT gmail DOT com
### Dataset Summary
We introduce ImageCoDe, a vision-and-language benchmark that requires contextual language understanding in the form of pragmatics, temporality, long descriptions and visual nuances. The task: Given a detailed description, retrieve the target image among 10 minimally contrastive images. ImageCoDe contains 21K descriptions and 94K images. The images are primarily frames based on video datasets.
Dataset Structure
-----------------
### Data Instances
An instance contains a description, the corresponding image set name, and the target index:
### Data Splits
Dataset Creation
----------------
### Curation Rationale
The main goal of ImageCoDe is to highlight weaknesses of recent Vision-and-Language models regarding complex language and fine-grained visual representations. In addition, we found that the dataset offers plenty of pragmatic examples and is therefore suitable for studying pragmatics.
| [
"### Dataset Summary\n\n\nWe introduce ImageCoDe, a vision-and-language benchmark that requires contextual language understanding in the form of pragmatics, temporality, long descriptions and visual nuances. The task: Given a detailed description, retrieve the target image among 10 minimally contrastive images. ImageCoDe contains 21K descriptions and 94K images. THe images are primarily frames based on video datasets.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn instance contains a description, the corresponding image set name, and the target index:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe main goal of ImageCoDe is to highlight weaknesses of recent Vision-and-Language models regarding complex language and fine-grained visual representations. In addition, we found that the dataset offers plenty of pragmatic examples and is therefore suitable for studying pragmatics."
] | [
"TAGS\n#license-afl-3.0 #arxiv-2203.15867 #region-us \n",
"### Dataset Summary\n\n\nWe introduce ImageCoDe, a vision-and-language benchmark that requires contextual language understanding in the form of pragmatics, temporality, long descriptions and visual nuances. The task: Given a detailed description, retrieve the target image among 10 minimally contrastive images. ImageCoDe contains 21K descriptions and 94K images. THe images are primarily frames based on video datasets.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn instance contains a description, the corresponding image set name, and the target index:",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe main goal of ImageCoDe is to highlight weaknesses of recent Vision-and-Language models regarding complex language and fine-grained visual representations. In addition, we found that the dataset offers plenty of pragmatic examples and is therefore suitable for studying pragmatics."
] |
3adb85380e4645c1fcf914a21ba4013410c0fa2b | # test-imagefolder-metadata | nateraw/test-imagefolder-metadata | [
"region:us"
] | 2022-05-05T23:18:59+00:00 | {} | 2022-05-06T00:51:57+00:00 | [] | [] | TAGS
#region-us
| # test-imagefolder-metadata | [
"# test-imagefolder-metadata"
] | [
"TAGS\n#region-us \n",
"# test-imagefolder-metadata"
] |