Datasets:
datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---
konwoo/lte-ctx16-fs1-np16-lr1e-05
|
konwoo
|
2025-05-06T22:17:43Z
| 0 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-06T22:17:39Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
- name: p_hat_log_probs
dtype: float32
- name: num_tokens
dtype: float32
splits:
- name: train
num_bytes: 10789651
num_examples: 128000
download_size: 8568699
dataset_size: 10789651
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alisawuffles/WANLI
|
alisawuffles
|
2022-11-21T17:31:56Z
| 179 | 10 |
[
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2201.05955",
"region:us"
] |
[
"text-classification"
] |
2022-04-21T00:57:25Z
| 0 |
---
annotations_creators:
- crowdsourced
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: WANLI
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WANLI homepage](https://wanli.allenai.org/)
- **Repository:** [Github repo](https://github.com/alisawuffles/wanli)
- **Paper:** [arXiv](https://arxiv.org/abs/2201.05955)
- **Point of Contact:** [Alisa Liu](mailto:[email protected])
### Dataset Summary
WANLI (**W**orker-**A**I Collaboration for **NLI**) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in [MultiNLI (Williams et al., 2018)](https://cims.nyu.edu/~sbowman/multinli/) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The generated examples are automatically filtered to keep those most likely to aid model training, then labeled and optionally revised by human annotators.
WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%.
Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.
### Languages
The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.
## Dataset Structure
### Data Instances
Here is an example of an NLI example in `data/wanli/train.jsonl` or `data/wanli/test.jsonl`.
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, or `contradiction`
- `genre`: one of `generated` or `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of the seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
We also release the raw annotations for each worker, which can be found in `data/wanli/anonymized_annotations.jsonl`.
```
"WorkerId": "EUJ",
"id": 271560,
"nearest_neighbors": [
309783,
202988,
145310,
98030,
148759
],
"premise": "I don't know what I'd do without my cat. He is my only friend.",
"hypothesis": "I would be alone.",
"label": "neutral",
"revised_premise": "I don't know what I'd do without my cat. He is my only friend.",
"revised_hypothesis": "I would be alone without my cat.",
"gold": "entailment",
"revised": true
```
- `WorkerId`: a unique identifier for each crowdworker (NOT the real worker ID from AMT)
- `id`: id of the generated example
- `nearest_neighbors`: ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is the seed ambiguous MNLI example. MNLI ids correspond to those in `mnli/train.jsonl`.
- `premise`: GPT-3 generated premise
- `hypothesis`: GPT-3 generated hypothesis
- `label`: the shared label of the in-context examples, which is the "intended" label for this generation
- `revised_premise`: premise after human review
- `revised_hypothesis`: hypothesis after human review
- `gold`: annotator-assigned gold label for the (potentially revised) example
- `revised`: whether the example was revised
### Data Splits
The dataset is randomly split into a *train* and *test* set.
| | train | test |
|-------------------------|------:|-----:|
| Examples | 102885| 5000|
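As a quick orientation, here is a minimal loading sketch with the `datasets` library, assuming the repository loads directly from the Hub under the id `alisawuffles/WANLI`; the field names follow the descriptions above.
```python
from datasets import load_dataset

# Load both splits of WANLI from the Hugging Face Hub
wanli = load_dataset("alisawuffles/WANLI")
print(wanli)  # DatasetDict with a ~103k-example train split and a 5k-example test split

example = wanli["train"][0]
# Fields described above: id, premise, hypothesis, gold, genre, pairID
print(example["premise"])
print(example["hypothesis"])
print(example["gold"])  # one of: entailment, neutral, contradiction
```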
## Dataset Creation
### Curation Rationale
A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.
### Source Data
#### Initial Data Collection and Normalization
Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from [Swayamdipta et al. (2020)](https://aclanthology.org/2020.emnlp-main.746/) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.
#### Who are the source language producers?
The GPT-3 Curie model generated examples which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.
Workers were paid $0.12 for each example they annotated. At the end of data collection, we aggregated the earnings and time spent by each crowdworker and found that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.
### Annotations
#### Annotation process
Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.
Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.
Crowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.
For examples that both annotators labeled without revision, annotators achieved a Cohen's kappa of 0.60, indicating substantial agreement.
#### Who are the annotators?
Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.
300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems.
### Discussion of Biases
Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language.
To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive.
Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset.
## Additional Information
### Dataset Curators
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
```
|
HungVu2003/opt-350m_beta_0.0_alpha_0.6_num-company_2_dataset_1_for_gen_2
|
HungVu2003
|
2025-04-08T09:40:28Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-08T09:40:26Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 675803
num_examples: 8750
download_size: 440253
dataset_size: 675803
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vimax97/product-captioned-dataset-synthetic_marble
|
Vimax97
|
2025-03-28T21:45:53Z
| 22 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-28T21:45:46Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 15369713.0
num_examples: 18
download_size: 13650317
dataset_size: 15369713.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b8692b64-f8cf-4ee1-a0bf-021748469668
|
argilla-internal-testing
|
2024-11-19T12:47:20Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-19T12:47:19Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/openmathreasoning_0.3k
|
mlfoundations-dev
|
2025-04-28T04:41:55Z
| 20 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-28T04:41:47Z
| 0 |
---
dataset_info:
features:
- name: expected_answer
dtype: string
- name: problem_type
dtype: string
- name: problem_source
dtype: string
- name: generation_model
dtype: string
- name: pass_rate_72b_tir
dtype: string
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: inference_mode
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 14116830.4179433
num_examples: 316
download_size: 6168716
dataset_size: 14116830.4179433
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/aimo-validation-aime_gpt-4o_responses_train
|
Asap7772
|
2025-01-28T04:56:17Z
| 62 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-28T04:56:13Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: url
dtype: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 196108955
num_examples: 90
download_size: 84070225
dataset_size: 196108955
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rediska0123/bio-claim-human-anno
|
rediska0123
|
2024-12-03T18:10:56Z
| 28 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-03T17:12:48Z
| 0 |
---
configs:
- config_name: default
data_files:
- split: en
path: data/en-*
- split: ru
path: data/ru-*
- split: ar
path: data/ar-*
- split: zh
path: data/zh-*
dataset_info:
features:
- name: subject
dtype: string
- name: claim
dtype: string
- name: sentence
dtype: string
- name: GPT class
dtype: bool
- name: human1 class
dtype: bool
- name: human2 class
dtype: bool
- name: human3 class
dtype: bool
splits:
- name: en
num_bytes: 19204
num_examples: 100
- name: ru
num_bytes: 143550
num_examples: 140
- name: ar
num_bytes: 126175
num_examples: 478
- name: zh
num_bytes: 8667
num_examples: 100
download_size: 135485
dataset_size: 297596
---
# Dataset Card for "bio-claim-human-anno"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
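Given the four language splits declared in the YAML configuration above, a minimal loading sketch (assuming the repository loads directly via `load_dataset`) looks like this:
```python
from datasets import load_dataset

# Each language is exposed as its own split: en, ru, ar, zh
ds = load_dataset("rediska0123/bio-claim-human-anno")

en = ds["en"]
print(en.num_rows)      # 100 examples in the English split
print(en.column_names)  # subject, claim, sentence, GPT class, human1-3 class
```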
|
amatallah/powerbank-solarpanel-module
|
amatallah
|
2025-04-22T19:19:19Z
| 18 | 0 |
[
"license:creativeml-openrail-m",
"region:us"
] |
[] |
2025-04-22T19:19:19Z
| 0 |
---
license: creativeml-openrail-m
---
|
August4293/gsm8k_dense_rewards_filtered_batch_1
|
August4293
|
2025-05-19T14:30:56Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-19T14:30:53Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: ground_truth
dtype: int64
splits:
- name: train
num_bytes: 15740
num_examples: 60
download_size: 11621
dataset_size: 15740
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mchl914/panacea_articles_3
|
mchl914
|
2025-04-02T21:01:19Z
| 13 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-02T21:00:25Z
| 0 |
---
dataset_info:
features:
- name: title
dtype: string
- name: uuid
dtype: string
- name: pmc_id
dtype: string
- name: search_term
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1676190369
num_examples: 36745
download_size: 624742031
dataset_size: 1676190369
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kh4dien/stories-preference-raw
|
kh4dien
|
2025-03-11T19:14:24Z
| 18 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-11T19:14:17Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: original
dtype: string
- name: unintended
dtype: string
splits:
- name: train
num_bytes: 46754744
num_examples: 20000
- name: test
num_bytes: 1927122
num_examples: 1000
download_size: 25055758
dataset_size: 48681866
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
colabfit/PWMLFF_feature_comparison_NPJ2023
|
colabfit
|
2025-04-23T18:12:54Z
| 13 | 0 |
[
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"molecular dynamics",
"mlip",
"interatomic potential"
] |
[] |
2025-04-01T18:21:09Z
| 0 |
---
configs:
- config_name: default
data_files: "main/*.parquet"
license: cc-by-4.0
tags:
- molecular dynamics
- mlip
- interatomic potential
pretty_name: PWMLFF feature comparison NPJ2023
---
# Dataset
PWMLFF feature comparison NPJ2023
### Description
Partial dataset for "Accuracy evaluation of different machine learning force field features". The included data is limited to that hosted directly in the repository at the related GitHub link. From the publication abstract: Predicting energies and forces with a machine learning force field (MLFF) depends on accurate descriptions (features) of the chemical environment. Despite the numerous features proposed, there is a lack of controlled comparison among them for their universality and accuracy. In this work, we compared several commonly used feature types for their ability to describe physical systems. These feature types include the cosine feature, Gaussian feature, moment tensor potential (MTP) feature, spectral neighbor analysis potential feature, simplified smooth deep potential with Chebyshev polynomials and Gaussian polynomials features, and atomic cluster expansion feature. We evaluated the training root mean square error (RMSE) for the atomic group energy, total energy, and force using a linear regression model with respect to density functional theory results. We applied these MLFF models to an amorphous sulfur system and to carbon systems, and the fitting results show that the MTP feature yields the smallest RMSE compared with the other feature types for both the sulfur and carbon systems in disordered atomic configurations. Moreover, as an extended test on other systems, the MTP feature combined with a linear regression model can also reproduce similar quantities along ab initio molecular dynamics trajectories, as represented by Cu systems. Our results are helpful in selecting the proper features for MLFF development.
<br>Additional details are stored in dataset columns prefixed with "dataset_".
### Dataset authors
Ting Han, Jie Li, Liping Liu, Fengyu Li, Lin-Wang Wang
### Publication
https://www.doi.org/10.1088/1367-2630/acf2bb
### Original data link
https://github.com/LonxunQuantum/PWMLFF_library/tree/main
### License
CC-BY-4.0
### Number of unique molecular configurations
17255
### Number of atoms
918240
### Elements included
C, H, Mg, Ni, O, Si
### Properties included
energy, atomic forces, cauchy stress
### Cite this dataset
Han, T., Li, J., Liu, L., Li, F., and Wang, L. _PWMLFF feature comparison NPJ2023_. ColabFit, 2024. https://doi.org/10.60732/209e0c9c
|
birsapula/so101_test
|
birsapula
|
2025-06-04T17:24:48Z
| 0 | 0 |
[
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-06-04T17:17:31Z
| 0 |
---
license: cc-by-nc-4.0
---
|
tttx/3k_forcing_022225_1500_buffer
|
tttx
|
2025-02-23T02:44:09Z
| 17 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-23T02:23:52Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 32455866.624122526
num_examples: 1500
- name: test
num_bytes: 16330
num_examples: 1
download_size: 8940716
dataset_size: 32472196.624122526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
akhooli/ar_mmarco_dfs200k_s
|
akhooli
|
2024-11-27T17:39:24Z
| 18 | 0 |
[
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-27T15:42:20Z
| 0 |
---
license: mit
dataset_info:
features:
- name: query_id
dtype: int64
- name: document_ids
sequence: string
- name: scores
sequence: float64
splits:
- name: train
num_bytes: 123917601
num_examples: 200000
download_size: 67240710
dataset_size: 123917601
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fancyzhx/amazon_polarity
|
fancyzhx
|
2024-01-09T12:23:33Z
| 4,531 | 46 |
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1509.01626",
"region:us"
] |
[
"text-classification"
] |
2022-03-02T23:29:22Z
| 1 |
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Amazon Review Polarity
dataset_info:
config_name: amazon_polarity
features:
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1604364432
num_examples: 3600000
- name: test
num_bytes: 178176193
num_examples: 400000
download_size: 1145430497
dataset_size: 1782540625
configs:
- config_name: amazon_polarity
data_files:
- split: train
path: amazon_polarity/train-*
- split: test
path: amazon_polarity/test-*
default: true
train-eval-index:
- config: amazon_polarity
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
content: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Amazon Review Polarity
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://registry.opendata.aws/
- **Repository:** https://github.com/zhangxiangxiao/Crepe
- **Paper:** https://arxiv.org/abs/1509.01626
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Xiang Zhang](mailto:[email protected])
### Dataset Summary
The Amazon reviews dataset consists of reviews from Amazon.
The data span a period of 18 years, including ~35 million reviews up to March 2013.
Reviews include product and user information, ratings, and a plaintext review.
### Supported Tasks and Leaderboards
- `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the content and the title, predict the sentiment polarity (positive or negative).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A typical data point comprises a title, the review content, and the corresponding label.
An example from the AmazonPolarity test set looks as follows:
```
{
'title':'Great CD',
'content':"My lovely Pat has one of the GREAT voices of her generation. I have listened to this CD for YEARS and I still LOVE IT. When I'm in a good mood it makes me feel better. A bad mood just evaporates like sugar in the rain. This CD just oozes LIFE. Vocals are jusat STUUNNING and lyrics just kill. One of life's hidden gems. This is a desert isle CD in my book. Why she never made it big is just beyond me. Everytime I play this, no matter black, white, young, old, male, female EVERYBODY says one thing ""Who was that singing ?""",
'label':1
}
```
### Data Fields
- 'title': a string containing the title of the review - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n".
- 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n".
- 'label': either 1 (positive) or 0 (negative) rating.
### Data Splits
The Amazon reviews polarity dataset is constructed by taking review scores 1 and 2 as negative, and 4 and 5 as positive. Samples with score 3 are ignored. Each class has 1,800,000 training samples and 200,000 testing samples.
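For reference, here is a minimal sketch of loading the dataset with the `datasets` library and mapping the integer labels back to their names; the repository id and the `title`/`content`/`label` fields come from the YAML metadata above.
```python
from datasets import load_dataset

# 'amazon_polarity' is the default config of this repository
ds = load_dataset("fancyzhx/amazon_polarity")
print(ds["train"].num_rows, ds["test"].num_rows)  # 3,600,000 / 400,000

sample = ds["test"][0]
label_names = ds["test"].features["label"].names  # ['negative', 'positive']
print(sample["title"])
print(label_names[sample["label"]])
```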
## Dataset Creation
### Curation Rationale
The Amazon reviews polarity dataset is constructed by Xiang Zhang ([email protected]). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Apache License 2.0
### Citation Information
McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
|
badger-lord/embeddings2025
|
badger-lord
|
2025-06-06T00:55:57Z
| 7 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-06-06T00:44:02Z
| 0 |
---
license: apache-2.0
---
|
ferrazzipietro/IK_llama3.1-8b_diann_16_64_0.01
|
ferrazzipietro
|
2024-12-13T08:11:32Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-13T08:11:30Z
| 0 |
---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 698000
num_examples: 364
- name: test
num_bytes: 879056
num_examples: 480
download_size: 678876
dataset_size: 1577056
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
UE-CESE/prochaines-etapes-pour-des-societes-plus-cohesives
|
UE-CESE
|
2025-06-22T12:15:44Z
| 0 | 0 |
[
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] |
[
"translation"
] |
2025-06-22T12:12:43Z
| 0 |
---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.eesc.europa.eu/fr/our-work/publications-other-work/publications/prochaines-etapes-pour-des-societes-plus-cohesives
## Description
In recent years, a series of overlapping crises — from the lingering effects of the pandemic to the escalating challenges of climate change, the rising cost of living and widening inequality — has fuelled a general rise in polarisation. Social instability, economic hardship and political discontent, particularly among those who feel unheard and left behind, have further deepened divisions within society.
To address these pressing concerns, Civil Society Week 2025 brought together a wide range of stakeholders from across Europe and the candidate countries, as well as experts, EU policymakers, representatives of national economic and social councils and of youth, and journalists.
Over four days, the event gave more than 800 participants a unique opportunity to engage in lively discussions, share good practices and work together on solutions to strengthen social cohesion and democratic participation.
If there is one lesson to be drawn from this week, it is that civil society's power to drive change is no less formidable than the challenges we face. Europe must act swiftly and boldly, standing up for its vibrant civil society as a pillar of lasting democracy.
Across the fourteen conference sessions organised by members of the CESE Liaison Group and the partners of the European Citizens' Initiative Day, notably during the CESE Civil Society Prize ceremony, civil society actors drew up a comprehensive set of actionable measures and key demands for a more cohesive and resilient Europe.
|
TIMBER-Lab/Qwen2.5-7B-Instruct-Turbo_still_math_r1_10_selected
|
TIMBER-Lab
|
2025-05-03T15:54:58Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-03T07:39:40Z
| 0 |
---
dataset_info:
features:
- name: ids
dtype: int64
- name: queries
dtype: string
- name: samples
sequence: string
- name: references
dtype: string
splits:
- name: train
num_bytes: 62419060
num_examples: 1750
download_size: 23601013
dataset_size: 62419060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/opencodereasoning_100k
|
mlfoundations-dev
|
2025-04-28T04:27:30Z
| 30 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-28T04:23:57Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: string
- name: solution
dtype: string
- name: index
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 6231466281.494176
num_examples: 100000
download_size: 2661859785
dataset_size: 6231466281.494176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
umuttbariss/dataset
|
umuttbariss
|
2025-04-23T18:42:26Z
| 20 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-23T17:51:43Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 60439123.199000984
num_examples: 121259
- name: test
num_bytes: 6715845.800999013
num_examples: 13474
download_size: 2971964
dataset_size: 67154969.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Tanxunze/WhosAya-Dataset
|
Tanxunze
|
2025-02-20T01:55:57Z
| 21 | 0 |
[
"task_categories:image-classification",
"license:mit",
"size_categories:n<1K",
"format:text",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"Aya-Maruyama"
] |
[
"image-classification"
] |
2025-02-20T01:33:49Z
| 0 |
---
license: mit
task_categories:
- image-classification
tags:
- Aya-Maruyama
size_categories:
- n<1K
---
# Look Description in [WhosAya](https://huggingface.co/Tanxunze/WhosAya) Model
|
anttip/Tunesets_Edu_v2
|
anttip
|
2025-06-15T09:39:21Z
| 0 | 1 |
[
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-15T07:12:28Z
| 0 |
---
license: mit
---
# Tunesets_Edu_v2
A filtered high-quality dataset blend for finetuning education-domain LLMs. The task focus is on non-reasoning instruction following, mostly around <16k context. The domain focus is on non-code and non-math tasks, including multilingual data. This dataset filters and samples data from the following datasets:
- [arcee-ai/The-Tome](https://huggingface.co/datasets/arcee-ai/The-Tome)
- [microsoft/orca-agentinstruct-1M-v1](https://huggingface.co/datasets/microsoft/orca-agentinstruct-1M-v1)
- [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk)
- [CohereLabs/aya_collection_language_split](https://huggingface.co/datasets/CohereLabs/aya_collection_language_split)
- [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
- [arcee-ai/EvolKit-75K](https://huggingface.co/datasets/arcee-ai/EvolKit-75K)
- [MaziyarPanahi/Llama-Nemotron-Post-Training-Dataset-v1-ShareGPT](https://huggingface.co/datasets/MaziyarPanahi/Llama-Nemotron-Post-Training-Dataset-v1-ShareGPT)
- [TIGER-Lab/WebInstruct-CFT](https://huggingface.co/datasets/TIGER-Lab/WebInstruct-CFT)
- [prometheus-eval/Feedback-Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection)
- [prometheus-eval/Preference-Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection)
- [argilla/magpie-ultra-v1.0](https://huggingface.co/datasets/argilla/magpie-ultra-v1.0)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
A subset of languages from aya_collection_language_split was selected to form a new dataset "aya_collection_merged":
french, german, spanish, italian, indonesian, japanese, chinese, standard_arabic, dutch, greek, korean, standard_malay, maori, portuguese, samoan, thai, turkish
The data from the datasets is exactly as in the originals; only filtering and sampling have been applied to obtain a higher-quality dataset.
The datasets were processed in the following order:
1. Rule-based noise and length filtering
2. Deduplication of conversations using Minhash and string similarities
3. Filtering and balanced sampling based on LLM classifications
The dataset rows were classified using AWQ-quantized versions of Arcee AI finetunes:
- [arcee-ai/SuperNova-Medius](https://huggingface.co/AMead10/SuperNova-Medius-AWQ)
- [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz-AWQ)
The following prompt was used, and the classifications from the two models were merged.
```
You are a senior data analyst. The following is a discussion between a human user and AI assistant. Evaluate the discussion and the performance of the AI, and fill the following json template:
{
"discussion_language": # Main language of the discussion.
"discussion_category": # Task category of the discussion. 1 or 2 keywords.
"response_difficulty": # Level of expertise required in the topic. Easy/Medium/Hard
"response_quality": # Quality of the assistant's responses. Bad/Average/Good
"response_complete": # The AI gives complete responses to the requests. Yes/No
"response_errors": # The AI responses contain a clear error. Yes/No
"response_concise": # The AI responses are concise with no irrelevant parts. Yes/No
"overall_grade": # Overall grade of the discussion as LLM finetuning data. From 1 to 5, where 1 is useless, 5 is perfect.
}
Don't give any explanations, just fill the above json template. Here's the discussion to evaluate:
```
Row frequencies of the source repositories in the resulting sample:
```
CohereForAI/aya_collection_merged 881241
MaziyarPanahi/Llama-Nemotron-Post-Training-Dataset-v1-ShareGPT 497537
microsoft/orca-agentinstruct-1M-v1 490124
arcee-ai/The-Tome 402592
TIGER-Lab/WebInstruct-CFT 279564
argilla/magpie-ultra-v1.0 265875
HuggingFaceTB/smoltalk 232562
teknium/OpenHermes-2.5 204428
prometheus-eval/Preference-Collection 160068
HuggingFaceH4/ultrachat_200k 122247
arcee-ai/EvolKit-75K 47519
prometheus-eval/Feedback-Collection 33265
LDJnr/Capybara 4216
```
The top 20 most common categories in the dataset:
```
Document Summary 66346
News Summary 47168
Physics, Mathematics 42340
Geometry, Mathematics 23482
Probability, Statistics 19953
Mathematics, Geometry 19668
Mathematics, Calculus 19301
Data Analysis, Evaluation 18218
Text Classification 18161
Historical Summary 17555
Sports, Football 17137
Biology, Genetics 16669
Mathematics, Education 16571
History, Politics 16258
Math Problem 15891
Data Analysis, Statistics 15171
Creative Writing, Character Development 13734
Mathematics, Data Analysis 13242
Historical Analysis 12695
History, Military 12679
```
|
Factral/lacuna_malariav5
|
Factral
|
2024-11-03T15:18:54Z
| 29 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-03T15:15:12Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: int64
- name: category
sequence: int64
- name: id
sequence: int64
- name: area
sequence: int64
- name: image_id
dtype: int64
splits:
- name: train
num_bytes: 2882879915.208
num_examples: 2472
- name: validation
num_bytes: 333119955.0
num_examples: 275
- name: test
num_bytes: 333119955.0
num_examples: 275
download_size: 3351293227
dataset_size: 3549119825.208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
andreuka18/Nemotron-Post-Training-Dataset-10k-Nemotron-Nano-v1
|
andreuka18
|
2025-06-24T17:37:24Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-24T17:37:13Z
| 0 |
---
dataset_info:
features:
- name: input
list:
- name: role
dtype: string
- name: content
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 237074579
num_examples: 10000
download_size: 107411220
dataset_size: 237074579
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/w2r125k_r2r60k_r150k_ep3_tmp07
|
1231czx
|
2025-01-07T08:27:08Z
| 42 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-07T08:09:27Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 19270940
num_examples: 5000
download_size: 6069442
dataset_size: 19270940
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
orhunt/synthetic_slds_2048x2048
|
orhunt
|
2025-06-23T13:58:22Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-23T13:57:21Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: graph
dtype: string
splits:
- name: train
num_bytes: 36129310.0
num_examples: 744
- name: validation
num_bytes: 6331981.0
num_examples: 131
download_size: 29899558
dataset_size: 42461291.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
shaznin/task2_severity_prediction
|
shaznin
|
2025-01-25T05:44:46Z
| 12 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-25T05:10:19Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 31295366
num_examples: 8040
- name: test
num_bytes: 8084663
num_examples: 2010
download_size: 16139448
dataset_size: 39380029
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ZhaoYeP/IncivilityCaps
|
ZhaoYeP
|
2025-03-04T15:17:13Z
| 13 | 1 |
[
"task_categories:text-generation",
"task_categories:image-to-text",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"text-generation",
"image-to-text"
] |
2024-11-07T00:27:34Z
| 0 |
---
license: apache-2.0
language:
- en
size_categories:
- 10B<n<100B
task_categories:
- text-generation
- image-to-text
dataset_info:
- config_name: IncivilityCaps-8k
features:
- name: id
dtype: string
- name: file_name
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: pt_type
dtype: string
- name: insp_std
dtype: string
- name: date
dtype: timestamp[s]
format: '%Y-%m-%d'
- config_name: IncivilityCaps-40k
features:
- name: id
dtype: string
- name: file_name
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: pt_type
dtype: string
- name: insp_std
dtype: string
- name: date
dtype: timestamp[s]
format: '%Y-%m-%d'
configs:
- config_name: IncivilityCaps-8k
data_files:
- split: train
path: IncivilityCaps-8k/train/train_*
- split: test
path: IncivilityCaps-8k/test/test_*
- split: val
path: IncivilityCaps-8k/val/val*
- config_name: IncivilityCaps-40k
data_files:
- split: train
path: IncivilityCaps-40k/train/train_*
- split: test
path: IncivilityCaps-40k/test/test_*
- split: val
path: IncivilityCaps-40k/val/val_*
---
|
tdayanov/lerobot_rrwra_data
|
tdayanov
|
2025-06-15T00:37:32Z
| 0 | 0 |
[
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] |
[
"robotics"
] |
2025-06-14T20:04:43Z
| 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# lerobot_rrwra_data
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
kelingwang/causation_strength_rating
|
kelingwang
|
2025-01-15T14:33:43Z
| 19 | 0 |
[
"task_categories:text-classification",
"language:en",
"license:mit",
"size_categories:n<1K",
"region:us",
"causal inference",
"epidemiology",
"medical"
] |
[
"text-classification"
] |
2024-10-07T14:43:59Z
| 0 |
---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- causal inference
- epidemiology
- medical
pretty_name: c
size_categories:
- n<1K
---
|
deddyext/dibi-disaster-data
|
deddyext
|
2025-04-20T14:11:18Z
| 9 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-20T14:11:11Z
| 0 |
---
dataset_info:
features:
- name: level0
dtype: string
- name: level1
dtype: string
- name: nwil
dtype: string
- name: nprop
dtype: string
- name: nkab
dtype: string
- name: kejadian
struct:
- name: nama
dtype: string
- name: url
dtype: string
- name: tglan
dtype: string
- name: kib
struct:
- name: bulan
dtype: string
- name: hari
dtype: string
- name: indeks
dtype: string
- name: jenis_bencana
dtype: string
- name: kab_kota
dtype: string
- name: kode_provinsi
dtype: string
- name: tahun
dtype: int64
- name: tanggal
dtype: int64
- name: idj
dtype: string
- name: meninggal
dtype: string
- name: hilang
dtype: string
- name: terluka
dtype: string
- name: menderita
dtype: string
- name: mengungsi
dtype: string
- name: rumah_rusak_berat
dtype: string
- name: rumah_rusak_sedang
dtype: string
- name: rumah_rusak_ringan
dtype: string
- name: rumah_terendam
dtype: string
- name: pendidikan
dtype: string
- name: kesehatan
dtype: string
- name: peribadatan
dtype: string
- name: fasum
dtype: string
- name: sql
dtype: string
- name: act
dtype: string
splits:
- name: train
num_bytes: 17573922
num_examples: 53900
download_size: 2229065
dataset_size: 17573922
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tiagoviott/fl
|
tiagoviott
|
2025-02-14T21:09:05Z
| 17 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-13T17:10:15Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 396797
num_examples: 45
download_size: 195790
dataset_size: 396797
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
galgol/quickb-qa
|
galgol
|
2025-02-01T16:27:48Z
| 27 | 0 |
[
"task_categories:text-generation",
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"language:en",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"quickb",
"text-chunking",
"question-generation",
"unknown"
] |
[
"text-generation",
"text-retrieval"
] |
2025-02-01T09:23:31Z
| 0 |
---
language:
- en
pretty_name: "quickb-qa"
tags:
- quickb
- text-chunking
- question-generation
- unknown
task_categories:
- text-generation
- text-retrieval
task_ids:
- document-retrieval
library_name: quickb
---
# quickb-qa
Generated using [QuicKB](https://github.com/AdamLucek/quickb), a tool developed by [Adam Lucek](https://huggingface.co/AdamLucek).
QuicKB optimizes document retrieval by creating fine-tuned knowledge bases through an end-to-end pipeline that handles document chunking, training data generation, and embedding model optimization.
### Question Generation
- **Model**: huggingface/starcoder
- **Deduplication threshold**: 0.85
- **Results**:
- Total questions generated: 0
- Questions after deduplication: 0
### Dataset Structure
- `anchor`: The generated question
- `positive`: The text chunk containing the answer
- `question_id`: Unique identifier for the question
- `chunk_id`: Reference to the source chunk
|
kkchaulagain/LieWaves-EEG
|
kkchaulagain
|
2025-03-30T12:09:23Z
| 15 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-30T12:09:14Z
| 0 |
---
dataset_info:
features:
- name: '0'
dtype: float64
- name: label
dtype: float64
splits:
- name: train
num_bytes: 8294400
num_examples: 518400
download_size: 6988274
dataset_size: 8294400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JK-TK/550_exams
|
JK-TK
|
2025-06-20T10:02:54Z
| 0 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-06-20T09:40:54Z
| 0 |
---
license: apache-2.0
---
|
akahana/rontgen-text-only
|
akahana
|
2025-01-27T01:21:47Z
| 27 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2405.10004",
"region:us"
] |
[] |
2025-01-27T01:18:11Z
| 0 |
---
dataset_info:
features:
- name: image_id
dtype: string
- name: caption
dtype: string
- name: cui
sequence: string
splits:
- name: train
num_bytes: 11668074
num_examples: 59962
- name: validation
num_bytes: 2024347
num_examples: 9904
- name: test
num_bytes: 2028034
num_examples: 9927
download_size: 7123511
dataset_size: 15720455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
## Citation
If you use the ROCOv2 dataset in your research, please cite the following paper:
Pelka, O., Menze, B. H., & Rexhausen, S. E. (2023). Radiology Objects in COntext version 2 (ROCOv2): A multimodal dataset for medical image analysis.
arXiv preprint arXiv:2405.10004.
```latex
@misc {ronan_l.m._2024,
author = { {Ronan L.M.} },
title = { ROCOv2-radiology (Revision 5d66908) },
year = 2024,
url = { https://huggingface.co/datasets/eltorio/ROCOv2-radiology },
doi = { 10.57967/hf/3489 },
publisher = { Hugging Face }
}
```
## License
The ROCOv2 dataset is licensed under the CC BY-NC-SA 4.0 license.
## Acknowledgments
We acknowledge the National Library of Medicine (NLM) for providing access to the PMC Open Access Subset. We also acknowledge the creators of the Medical Concept Annotation Toolkit (MedCAT) for providing a valuable tool for concept extraction and annotation.
|
geoskyr/diagram_image_to_text
|
geoskyr
|
2025-06-20T12:07:35Z
| 5 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-08T16:26:51Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: original_text
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
- name: translated_text
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2880874.0
num_examples: 50
download_size: 2831955
dataset_size: 2880874.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/metamath-hint-sft-topk-4
|
Asap7772
|
2025-03-20T17:48:25Z
| 17 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-20T17:48:19Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: hint
dtype: string
- name: response
dtype: string
splits:
- name: test
num_bytes: 3439595
num_examples: 760
- name: train
num_bytes: 48307860
num_examples: 25420
download_size: 18027217
dataset_size: 51747455
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
GeroldMeisinger/laion2b-en-a65_cogvlm2-4bit_captions
|
GeroldMeisinger
|
2024-08-12T14:05:15Z
| 451 | 6 |
[
"task_categories:image-classification",
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2403.03206",
"region:us",
"CogVLM2",
"CogVLM2-4bit",
"laion2b-en-a65",
"laion-pop",
"stable-diffusion-3-medium"
] |
[
"image-classification",
"text-to-image",
"image-to-text"
] |
2024-06-13T11:52:05Z
| 1 |
---
license: cc-by-nc-sa-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- image-classification
- text-to-image
- image-to-text
tags:
- CogVLM2
- CogVLM2-4bit
- laion2b-en-a65
- laion-pop
- stable-diffusion-3-medium
pretty_name: laion2B-en aesthetics>=6.5 CogVLM2-4bit captions
---

# Abstract
This dataset contains image captions for the `laion2B-en aesthetics>=6.5` image dataset, generated with `CogVLM2-4bit` using the "laion-pop" prompt that was "likely" used in Stable Diffusion 3 training. From these image captions, new synthetic images were generated using `stable-diffusion-3-medium` (`batch-size=8`).
The synthetic images are best viewed locally by cloning this repo with:
```
git lfs install
git clone https://huggingface.co/datasets/GeroldMeisinger/laion2b-en-a65_cogvlm2-4bit_captions
```
(please note that the original images are NOT INCLUDED, see below)
# Status
* laion2B-en aesthetics>=6.5 original images (not included): 635561 (64 parts of ~10000 images each)
* images after filtering and de-duplication (not included, see `imagelist.txt`): 111486
* image captions from originals: 56484 (=parts 00000-00025 only)
* generated images from captions: 8x 2752 = 22016 (=part 00000 only)
*My heart is willing but the hardware is weak!*
# Tasks
* evaluate CogVLM2
* evaluate prompts used for Stable Diffusion 3 training
* evaluate Stable Diffusion 3 image generation and intra-prompt coherence
* evaluate Stable Diffusion 3 prompt comprehension and coherence
* evaluate Stable Diffusion 3 parametrization
* compare generated captions with original images
* compare generated captions with original alt-texts
* compare generated captions of originals versus synthetics
* train models on the original images with synthetic captions
*...or just look at the pretty pictures!*
# File structure
```
00000...00063/ # CogVLM2-4bit captions of the laion2b-en-a65 images
000000001.txt
000000002.txt
...
images_stable-diffusion-3-medium_q80/ # generated images (quality=80%)
cfg_30, cfg_45, cfg_60 # cfg value used
bosh3, dpmpp_2m, euler # sampler used
steps_15, steps_20, steps_28 # step size used
000000001_0.webp # batch number 0
...
000000001_7.webp
captions2images.py # send prompts to ComfyUI to generate images from captions
images2grid.py # display generated images as 2x2 or 3x3 grid
images2reencode.py # compress generated images to lossy
workflow_api.json # workflow for ComfyUI
```
# Reproduction
1. Download the [laion2B-en with aesthetics>=6.5 image dataset](https://laion.ai/blog/laion-aesthetics). Unfortunately, the original dataset containing the image links is not available on the official site right now! [LAION is currently in "maintenance mode"](https://laion.ai/notes/laion-maintenance) and, as of June 2024, the process is still ongoing.
> LAION has a zero tolerance policy for illegal content and in an abundance of caution, we are temporarily taking down the LAION datasets to ensure they are safe before republishing them."
Thus you have to look for alternative sources to get the `improved_aesthetics_6.5plus.parquet` file (~120MB, `sha256 b10b1b7a60c70a34f6ae5fba662df4c6e893e0e5cb97d070cc7b56cebbd683b2`), such as [DagsHub-Datasets/LAION-Aesthetics-V2-6.5plus](https://dagshub.com/DagsHub-Datasets/LAION-Aesthetics-V2-6.5plus) or [bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images). You can view the file with [ParquetViewer](https://github.com/mukunku/ParquetViewer/releases).
The image set used here is incomplete and missing some files, for the following reasons:
* Aspect ratio filtering. Any image with aspect ratio > 2 was removed.
* De-duplication. Duplicate images were removed with [Fastdup - image de-duplication library for Python](https://github.com/visual-layer/fastdup) (on default settings) including "semantic" duplicates.
* Captioning takes about 15s per image and after a few days I just stopped.
You can find a full list of images used after filtering and de-duplication in `imagelist.txt`.
2. Install [Taggui - Image captioning UI tool and VLM model downloader](https://github.com/jhc13/taggui) and download [CogVLM2 - AI model for automatic image captioning](https://github.com/THUDM/CogVLM2) within the app. Note that this dataset was created using CogVLM2-4bit (version 2, not version 1!).
3. At [laion-pop](https://laion.ai/blog/laion-pop) we read that they used the following prompt with COGVLM (version 1!). Because Stability.ai and LAION work closely together we can assume that something similar was used for Stable Diffusion 3:
> Can you please describe this image in up to two paragraphs? Please specify any objects within the image, backgrounds, scenery, interactions, and gestures or poses. If they are multiple of any object, please specify how many. Is there text in the image, and if so, what does it say? If there is any lighting in the image, can you identify where it is and what it looks like? What style is the image? If there are people or characters in the image, what emotions are they conveying? Please keep your descriptions factual and terse but complete. DO NOT add any unnecessary speculation about the things that are not part of the image such as "the image is inspiring to viewers" or "seeing this makes you feel joy". DO NOT add things such as "creates a unique and entertaining visual", as these descriptions are interpretations and not a part of the image itself. The description should be purely factual, with no subjective speculation. Make sure to include the style of the image, for example cartoon, photograph, 3d render etc. Start with the words ‘This image showcases’:
>
> ‘This image showcases’ was trimmed from the beginning of each caption upon generation.
In the [Stable Diffusion 3 paper](https://arxiv.org/pdf/2403.03206) we read:
> As synthetic captions may cause a text-to-image model to forget about certain concepts not present in the VLM’s knowledge corpus, we use a ratio of 50 % original and 50 % synthetic captions.
I didn't care about object counting and texts and thus simplified the prompt slightly to this:
> Can you please describe this image in up to two paragraphs? Please specify any objects within the image, backgrounds, scenery, interactions, and gestures or poses. If there is any lighting in the image, can you identify where it is and what it looks like? What style is the image? If there are people or characters in the image, what emotions are they conveying? Please keep your descriptions factual and terse but complete. DO NOT add any unnecessary speculation about the things that are not part of the image such as "the image is inspiring to viewers" or "seeing this makes you feel joy". DO NOT add things such as "creates a unique and entertaining visual", as these descriptions are interpretations and not a part of the image itself. The description should be purely factual, with no subjective speculation. Make sure to include the style of the image, for example cartoon, photograph, 3d render etc.
In Taggui I used `token length=512` and `Start caption with: This image showcases`, which I later removed with:
```
# strip the forced "This image showcases " prefix from the caption files in parts 00018-00025
for i in {18..25}; do
  printf -v num "%05d" $i
  find "${num}/" -type f -name "*.txt" -exec sed -i 's/^This image showcases //' {} +
done
```
4. Optional: Start the [ComfyUI Stable Diffusion server](https://github.com/comfyanonymous/ComfyUI) and run the accompanying `captions2images.py` to generate the synthetic images from the captions (see the sketch after this list). This runs a permutation of `8 x cfg=[3.0, 4.5, 6.0] x samplers=["euler", "dpmpp_2m", "bosh3"] x steps=[15, 20, 28]` which leaves 2752 / 81 ~ 34 image batches to compare. I then ran `images2reencode.py` before upload to compress the images from 28GB to 8GB.
5. Optional: Run `images2grid.py` if you want to view the synthetic images with side-by-side comparison of the originals.
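For orientation, here is a minimal sketch of what `captions2images.py` might do (the real script ships in this repo; the node ids `"6"` and `"3"` and their input field names are assumptions that depend on how `workflow_api.json` was exported, and the batch size of 8 lives inside the workflow itself):
```python
# Hypothetical sketch: queue one ComfyUI job per caption, cycling through the
# cfg / sampler / steps permutation described above.
import itertools
import json
from pathlib import Path
from urllib import request

COMFY_URL = "http://127.0.0.1:8188/prompt"   # default ComfyUI HTTP endpoint
workflow = json.loads(Path("workflow_api.json").read_text())

settings = itertools.cycle(itertools.product(
    [3.0, 4.5, 6.0],                 # cfg
    ["euler", "dpmpp_2m", "bosh3"],  # sampler
    [15, 20, 28],                    # steps
))

for caption_file in sorted(Path("00000").glob("*.txt")):
    cfg, sampler, steps = next(settings)
    wf = json.loads(json.dumps(workflow))                          # cheap deep copy
    wf["6"]["inputs"]["text"] = caption_file.read_text().strip()   # positive prompt node (assumed id)
    wf["3"]["inputs"].update(cfg=cfg, sampler_name=sampler, steps=steps)  # KSampler node (assumed id)
    payload = json.dumps({"prompt": wf}).encode("utf-8")
    req = request.Request(COMFY_URL, data=payload, headers={"Content-Type": "application/json"})
    request.urlopen(req)                                           # ComfyUI queues and renders the batch of 8
```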
|
Khauneesh/test
|
Khauneesh
|
2025-01-28T14:48:49Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-28T14:48:48Z
| 0 |
---
dataset_info:
features:
- name: Seeds
dtype: string
- name: Prompt
dtype: string
- name: Completion
dtype: string
splits:
- name: train
num_bytes: 3098
num_examples: 3
download_size: 7442
dataset_size: 3098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imperial-cpg/copyright-traps-extra-non-members
|
imperial-cpg
|
2024-10-07T18:15:05Z
| 130 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-07T18:15:01Z
| 0 |
---
dataset_info:
features:
- name: perplexity_bucket
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: seq_len_25
num_bytes: 813824
num_examples: 7425
- name: seq_len_50
num_bytes: 1518773
num_examples: 7500
- name: seq_len_100
num_bytes: 2931380
num_examples: 7500
download_size: 3748263
dataset_size: 5263977
configs:
- config_name: default
data_files:
- split: seq_len_25
path: data/seq_len_25-*
- split: seq_len_50
path: data/seq_len_50-*
- split: seq_len_100
path: data/seq_len_100-*
---
|
Pullo-Africa-Protagonist/NEWDATA
|
Pullo-Africa-Protagonist
|
2025-04-18T14:47:27Z
| 26 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-15T05:30:15Z
| 0 |
---
dataset_info:
- config_name: group_001
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_001
num_bytes: 3664886
num_examples: 1000
download_size: 1808903
dataset_size: 3664886
- config_name: group_002
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_002
num_bytes: 3669107
num_examples: 1000
download_size: 1826467
dataset_size: 3669107
- config_name: group_003
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_003
num_bytes: 3666966
num_examples: 1000
download_size: 1822885
dataset_size: 3666966
- config_name: group_004
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_004
num_bytes: 3641054
num_examples: 1000
download_size: 1790960
dataset_size: 3641054
- config_name: group_005
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_005
num_bytes: 3706546
num_examples: 1000
download_size: 1834176
dataset_size: 3706546
- config_name: group_006
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_006
num_bytes: 3641134
num_examples: 1000
download_size: 1806495
dataset_size: 3641134
- config_name: group_007
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_007
num_bytes: 3624720
num_examples: 1000
download_size: 1787928
dataset_size: 3624720
- config_name: group_008
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_008
num_bytes: 3648575
num_examples: 1000
download_size: 1802821
dataset_size: 3648575
- config_name: group_009
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_009
num_bytes: 3776329
num_examples: 1000
download_size: 1864126
dataset_size: 3776329
- config_name: group_010
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_010
num_bytes: 3743553
num_examples: 1000
download_size: 1852755
dataset_size: 3743553
- config_name: group_011
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_011
num_bytes: 3691265
num_examples: 1000
download_size: 1831889
dataset_size: 3691265
- config_name: group_012
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_012
num_bytes: 3730302
num_examples: 1000
download_size: 1835627
dataset_size: 3730302
- config_name: group_013
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_013
num_bytes: 3670380
num_examples: 1000
download_size: 1817499
dataset_size: 3670380
- config_name: group_014
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_014
num_bytes: 3712956
num_examples: 1000
download_size: 1843771
dataset_size: 3712956
- config_name: group_015
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_015
num_bytes: 3685489
num_examples: 1000
download_size: 1828834
dataset_size: 3685489
- config_name: group_016
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_016
num_bytes: 3600553
num_examples: 1000
download_size: 1782497
dataset_size: 3600553
- config_name: group_017
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_017
num_bytes: 3674265
num_examples: 1000
download_size: 1822922
dataset_size: 3674265
- config_name: group_018
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_018
num_bytes: 3669071
num_examples: 1000
download_size: 1816887
dataset_size: 3669071
- config_name: group_019
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_019
num_bytes: 3705063
num_examples: 1000
download_size: 1836618
dataset_size: 3705063
- config_name: group_020
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_020
num_bytes: 3624252
num_examples: 1000
download_size: 1794243
dataset_size: 3624252
- config_name: group_021
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_021
num_bytes: 3670349
num_examples: 1000
download_size: 1801647
dataset_size: 3670349
- config_name: group_022
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_022
num_bytes: 3669836
num_examples: 1000
download_size: 1826410
dataset_size: 3669836
- config_name: group_023
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_023
num_bytes: 3682315
num_examples: 1000
download_size: 1826539
dataset_size: 3682315
- config_name: group_024
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_024
num_bytes: 3569394
num_examples: 1000
download_size: 1763739
dataset_size: 3569394
- config_name: group_025
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_025
num_bytes: 3669858
num_examples: 1000
download_size: 1809325
dataset_size: 3669858
- config_name: group_026
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_026
num_bytes: 3670217
num_examples: 1000
download_size: 1832165
dataset_size: 3670217
- config_name: group_027
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_027
num_bytes: 3753424
num_examples: 1000
download_size: 1843680
dataset_size: 3753424
- config_name: group_028
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_028
num_bytes: 3719150
num_examples: 1000
download_size: 1841035
dataset_size: 3719150
- config_name: group_029
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_029
num_bytes: 3659327
num_examples: 1000
download_size: 1804868
dataset_size: 3659327
- config_name: group_030
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_030
num_bytes: 3669635
num_examples: 1000
download_size: 1819675
dataset_size: 3669635
- config_name: group_031
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_031
num_bytes: 3644414
num_examples: 1000
download_size: 1807146
dataset_size: 3644414
- config_name: group_032
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_032
num_bytes: 3691035
num_examples: 1000
download_size: 1824335
dataset_size: 3691035
- config_name: group_033
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_033
num_bytes: 3698003
num_examples: 1000
download_size: 1836742
dataset_size: 3698003
- config_name: group_034
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_034
num_bytes: 3758972
num_examples: 1000
download_size: 1852252
dataset_size: 3758972
- config_name: group_035
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_035
num_bytes: 3752328
num_examples: 1000
download_size: 1860840
dataset_size: 3752328
- config_name: group_036
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_036
num_bytes: 3640162
num_examples: 1000
download_size: 1808805
dataset_size: 3640162
- config_name: group_037
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_037
num_bytes: 3697595
num_examples: 1000
download_size: 1831176
dataset_size: 3697595
- config_name: group_038
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_038
num_bytes: 3696030
num_examples: 1000
download_size: 1827972
dataset_size: 3696030
- config_name: group_039
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_039
num_bytes: 3650814
num_examples: 1000
download_size: 1804046
dataset_size: 3650814
- config_name: group_040
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_040
num_bytes: 3622130
num_examples: 1000
download_size: 1779191
dataset_size: 3622130
- config_name: group_041
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_041
num_bytes: 3703674
num_examples: 1000
download_size: 1824803
dataset_size: 3703674
- config_name: group_042
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_042
num_bytes: 3717098
num_examples: 1000
download_size: 1830840
dataset_size: 3717098
- config_name: group_043
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_043
num_bytes: 3617590
num_examples: 1000
download_size: 1797695
dataset_size: 3617590
- config_name: group_044
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_044
num_bytes: 3623383
num_examples: 1000
download_size: 1788077
dataset_size: 3623383
- config_name: group_045
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_045
num_bytes: 3662910
num_examples: 1000
download_size: 1797931
dataset_size: 3662910
- config_name: group_046
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_046
num_bytes: 3699012
num_examples: 1000
download_size: 1823680
dataset_size: 3699012
- config_name: group_047
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_047
num_bytes: 3635450
num_examples: 1000
download_size: 1808927
dataset_size: 3635450
- config_name: group_048
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_048
num_bytes: 3688456
num_examples: 1000
download_size: 1819557
dataset_size: 3688456
- config_name: group_049
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_049
num_bytes: 3682204
num_examples: 1000
download_size: 1831621
dataset_size: 3682204
- config_name: group_050
features:
- name: kri_question
dtype: string
- name: kri_reference_answer
dtype: string
- name: kri_responses
list:
- name: response
dtype: string
- name: response_model
dtype: string
- name: kri_category
dtype: string
splits:
- name: group_050
num_bytes: 3628223
num_examples: 1000
download_size: 1797115
dataset_size: 3628223
configs:
- config_name: group_001
data_files:
- split: group_001
path: group_001/group_001-*
- config_name: group_002
data_files:
- split: group_002
path: group_002/group_002-*
- config_name: group_003
data_files:
- split: group_003
path: group_003/group_003-*
- config_name: group_004
data_files:
- split: group_004
path: group_004/group_004-*
- config_name: group_005
data_files:
- split: group_005
path: group_005/group_005-*
- config_name: group_006
data_files:
- split: group_006
path: group_006/group_006-*
- config_name: group_007
data_files:
- split: group_007
path: group_007/group_007-*
- config_name: group_008
data_files:
- split: group_008
path: group_008/group_008-*
- config_name: group_009
data_files:
- split: group_009
path: group_009/group_009-*
- config_name: group_010
data_files:
- split: group_010
path: group_010/group_010-*
- config_name: group_011
data_files:
- split: group_011
path: group_011/group_011-*
- config_name: group_012
data_files:
- split: group_012
path: group_012/group_012-*
- config_name: group_013
data_files:
- split: group_013
path: group_013/group_013-*
- config_name: group_014
data_files:
- split: group_014
path: group_014/group_014-*
- config_name: group_015
data_files:
- split: group_015
path: group_015/group_015-*
- config_name: group_016
data_files:
- split: group_016
path: group_016/group_016-*
- config_name: group_017
data_files:
- split: group_017
path: group_017/group_017-*
- config_name: group_018
data_files:
- split: group_018
path: group_018/group_018-*
- config_name: group_019
data_files:
- split: group_019
path: group_019/group_019-*
- config_name: group_020
data_files:
- split: group_020
path: group_020/group_020-*
- config_name: group_021
data_files:
- split: group_021
path: group_021/group_021-*
- config_name: group_022
data_files:
- split: group_022
path: group_022/group_022-*
- config_name: group_023
data_files:
- split: group_023
path: group_023/group_023-*
- config_name: group_024
data_files:
- split: group_024
path: group_024/group_024-*
- config_name: group_025
data_files:
- split: group_025
path: group_025/group_025-*
- config_name: group_026
data_files:
- split: group_026
path: group_026/group_026-*
- config_name: group_027
data_files:
- split: group_027
path: group_027/group_027-*
- config_name: group_028
data_files:
- split: group_028
path: group_028/group_028-*
- config_name: group_029
data_files:
- split: group_029
path: group_029/group_029-*
- config_name: group_030
data_files:
- split: group_030
path: group_030/group_030-*
- config_name: group_031
data_files:
- split: group_031
path: group_031/group_031-*
- config_name: group_032
data_files:
- split: group_032
path: group_032/group_032-*
- config_name: group_033
data_files:
- split: group_033
path: group_033/group_033-*
- config_name: group_034
data_files:
- split: group_034
path: group_034/group_034-*
- config_name: group_035
data_files:
- split: group_035
path: group_035/group_035-*
- config_name: group_036
data_files:
- split: group_036
path: group_036/group_036-*
- config_name: group_037
data_files:
- split: group_037
path: group_037/group_037-*
- config_name: group_038
data_files:
- split: group_038
path: group_038/group_038-*
- config_name: group_039
data_files:
- split: group_039
path: group_039/group_039-*
- config_name: group_040
data_files:
- split: group_040
path: group_040/group_040-*
- config_name: group_041
data_files:
- split: group_041
path: group_041/group_041-*
- config_name: group_042
data_files:
- split: group_042
path: group_042/group_042-*
- config_name: group_043
data_files:
- split: group_043
path: group_043/group_043-*
- config_name: group_044
data_files:
- split: group_044
path: group_044/group_044-*
- config_name: group_045
data_files:
- split: group_045
path: group_045/group_045-*
- config_name: group_046
data_files:
- split: group_046
path: group_046/group_046-*
- config_name: group_047
data_files:
- split: group_047
path: group_047/group_047-*
- config_name: group_048
data_files:
- split: group_048
path: group_048/group_048-*
- config_name: group_049
data_files:
- split: group_049
path: group_049/group_049-*
- config_name: group_050
data_files:
- split: group_050
path: group_050/group_050-*
---
|
R2E-Gym/R2EGym-Verifier-Trajectories
|
R2E-Gym
|
2025-02-25T10:56:06Z
| 76 | 2 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-25T09:40:31Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: docker_images
dtype: string
- name: rewards
dtype: float64
splits:
- name: train
num_bytes: 331516111
num_examples: 5750
download_size: 107490761
dataset_size: 331516111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tmpmodelsave/llama3_sft_math_dpo_type12_8ktype4_2ktype3_new_250tmp10
|
tmpmodelsave
|
2025-01-13T04:11:24Z
| 64 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-13T04:11:23Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 22483677
num_examples: 5000
download_size: 7230504
dataset_size: 22483677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UniversalCEFR/elg_cefr_en
|
UniversalCEFR
|
2025-05-26T14:43:59Z
| 0 | 0 |
[
"language:en",
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-26T14:33:58Z
| 0 |
---
license: cc-by-nc-4.0
language:
- en
---
This dataset has been indexed in the UniversalCEFR. The transformed version (in JSON format) retains the same license as the original dataset. Ownership and copyright remain with the original creators and/or dataset paper authors. If you use this transformed dataset, you must cite the following:
Dataset License: cc-by-nc-4.0
Dataset Repository: https://www.edia.nl/resources/elg/downloads
Original Dataset Paper: Breuker, M. (2023). CEFR Labelling and Assessment Services. In: Rehm, G. (eds) European Language Grid. Cognitive Technologies. Springer, Cham. https://doi.org/10.1007/978-3-031-17258-8_16
|
strombergnlp/polstance
|
strombergnlp
|
2022-10-25T21:42:18Z
| 23 | 1 |
[
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"size_categories:n<1K",
"region:us",
"stance-detection"
] |
[
"text-classification"
] |
2022-04-28T10:08:13Z
| 0 |
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-analysis
paperswithcode_id: polstance
pretty_name: Political Stance for Danish
tags:
- stance-detection
---
# Dataset Card for "polstance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/politicalstanceindanish/](https://stromberg.ai/publication/politicalstanceindanish/)
- **Repository:** [https://github.com/StrombergNLP/Political-Stance-in-Danish/](https://github.com/StrombergNLP/Political-Stance-in-Danish/)
- **Paper:** [https://aclanthology.org/W19-6121/](https://aclanthology.org/W19-6121/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 548 KB
- **Size of the generated dataset:** 222 KB
- **Total amount of disk used:** 770 KB
### Dataset Summary
Political stance detection in Danish. Each example is a statement by a politician,
annotated as for, against, or neutral towards a given topic.
### Supported Tasks and Leaderboards
* Stance detection, framed as three-way text classification (against / neutral / for). No active leaderboard is associated with this dataset.
### Languages
Danish, bcp47: `da-DK`
## Dataset Structure
### Data Instances
#### polstance
An example of 'train' looks as follows.
```
{
'id': '0',
'topic': 'integration',
'quote': 'Der kunne jeg godt tænke mig, at der stod mere eksplicit, at de (landene, red.) skal bekæmpe menneskesmuglere og tage imod deres egne borgere',
'label': 2,
'quoteID': '516',
'party': 'Det Konservative Folkeparti',
'politician': 'Naser Khader',
}
```
### Data Fields
- `id`: a `string` feature.
- `topic`: a `string` expressing a topic.
- `quote`: a `string` to be classified for its stance to the topic.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "against",
1: "neutral",
2: "for",
```
- `quoteID`: a `string` of the internal quote ID.
- `party`: a `string` describing the party affiliation of the quote utterer at the time of utterance.
- `politician`: a `string` naming the politician who uttered the quote.
### Data Splits
| name |train|
|---------|----:|
|polstance|900 sentences|
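As a quick usage sketch (the dataset id `strombergnlp/polstance` and the `train` split name are taken from this card; older script-based loaders may additionally need `trust_remote_code=True`):
```python
# Minimal loading sketch for the polstance dataset.
from datasets import load_dataset

ds = load_dataset("strombergnlp/polstance", split="train")
labels = ["against", "neutral", "for"]          # index -> label name, per the tagset above
ex = ds[0]
print(f'{ex["politician"]} ({ex["party"]}): {ex["quote"]} -> {labels[ex["label"]]}')
```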
## Dataset Creation
### Curation Rationale
Collection of quotes from politicians to allow detecting how political quotes orient to issues.
### Source Data
#### Initial Data Collection and Normalization
The data is taken from proceedings of the Danish parliament, the Folketing - [ft.dk](https://ft.dk).
#### Who are the source language producers?
Danish politicians.
### Annotations
#### Annotation process
Annotators labelled each quote as being against, neutral towards, or for a specified topic.
#### Who are the annotators?
Danish native speakers, 20s, male, studying Software Design.
### Personal and Sensitive Information
The data was public at the time of collection and will remain open public record by law in Denmark.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
Nexdata/204-Hours-English-Philippine-Spontaneous-Dialogue-Smartphone-speech-dataset
|
Nexdata
|
2025-05-09T03:04:09Z
| 2 | 0 |
[
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-05-08T08:07:49Z
| 0 |
---
license: cc-by-nc-4.0
---
# 204-Hours-English-Philippine-Spontaneous-Dialogue-Smartphone-speech-dataset
## Description
English (Philippine) Spontaneous Dialogue Smartphone speech dataset, collected from dialogues based on given topics. Transcribed with text content, timestamp, speaker ID, gender and other attributes. The dataset was collected from a large and geographically diverse pool of speakers (around 400 native speakers), enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the protection of user privacy and legal rights throughout the data collection, storage, and usage processes; our datasets are all GDPR, CCPA and PIPL compliant.
For more details, please refer to the link: https://www.nexdata.ai/datasets/speechrecog/1397?source=huggingface
## Specifications
### Format
16kHz, 16bit, uncompressed wav, mono channel
### Content category
Dialogue based on given topics
### Recording condition
Low background noise (indoor)
### Recording device
Android smartphone, iPhone
### Country
Philippines (PHL)
### Language(Region) Code
en-PH
### Language
English
### Speaker
304 native speakers in total, 42% male and 58% female
### Features of annotation
Transcription text, timestamp, speaker ID, gender, noise
### Accuracy rate
Sentence accuracy rate (SAR): 95%
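## Quick format check
A minimal sketch to verify that a downloaded clip matches the stated format (16 kHz, 16-bit, mono wav); it assumes the `soundfile` library and a placeholder file name, so adapt both to your setup:
```python
# Hypothetical check of one clip against the spec above.
import soundfile as sf

info = sf.info("example_clip.wav")   # placeholder path
assert info.samplerate == 16000      # 16 kHz
assert info.channels == 1            # mono
assert info.subtype == "PCM_16"      # 16-bit PCM
print(info.duration, "seconds")
```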
## Licensing Information
Commercial License
|
BioMCQA/test_mcq
|
BioMCQA
|
2025-05-14T15:28:25Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-14T15:28:23Z
| 0 |
---
dataset_info:
features:
- name: question_key
dtype: string
- name: relation
dtype: string
- name: question_template
dtype: string
- name: subject_cui
dtype: string
- name: subject_label
dtype: string
- name: correct_object_cui
dtype: string
- name: correct_object_label
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype: int64
splits:
- name: train
num_bytes: 224187
num_examples: 610
download_size: 126143
dataset_size: 224187
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rikeshsilwalekg/call-conversation-llm-tokenized-Llama-3.2-1B-Instruct-maxseq-2048-dynamic-json-response
|
rikeshsilwalekg
|
2025-03-06T07:01:15Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-06T07:01:09Z
| 0 |
---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 8557080
num_examples: 835
download_size: 262422
dataset_size: 8557080
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yananchen/math_shots_16
|
yananchen
|
2024-10-22T20:31:26Z
| 20 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-22T18:54:34Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 101027045
num_examples: 7500
- name: test
num_bytes: 67075060
num_examples: 5000
download_size: 254283145
dataset_size: 168102105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
HachiML/mgsm_250-QwQ-CoT-0.5B-JA-v1.1-MCTS-ips12-mi15-mss32-et0-sa
|
HachiML
|
2024-12-19T00:51:35Z
| 19 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-19T00:51:32Z
| 0 |
---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: response
dtype: string
- name: pred
dtype: string
- name: correct
dtype: bool
splits:
- name: test
num_bytes: 293672
num_examples: 250
download_size: 124744
dataset_size: 293672
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
alea-institute/kl3m-data-dotgov-www.ustaxcourt.gov
|
alea-institute
|
2025-04-11T01:46:25Z
| 44 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.07854",
"arxiv:2503.17247",
"region:us"
] |
[] |
2025-02-02T12:15:37Z
| 0 |
---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 36488148
num_examples: 1321
download_size: 7204925
dataset_size: 36488148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# KL3M Data Project
> **Note**: This page provides general information about the KL3M Data Project. Additional details specific to this dataset will be added in future updates. For complete information, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854).
## Description
This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
## Dataset Details
- **Format**: Parquet files containing document text and metadata
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
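As an illustrative sketch (not an official snippet; it assumes this dataset loads with `datasets` and that the tokenizer is loadable through `transformers`), the pre-tokenized `tokens` field can be decoded back to text:
```python
# Hypothetical sketch: reconstruct document text from the pre-tokenized representation.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("alea-institute/kl3m-data-dotgov-www.ustaxcourt.gov", split="train")
tok = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-cased")

doc = ds[0]
text = tok.decode(doc["tokens"])                      # decode token ids back to text
print(doc["identifier"], doc["mime_type"], text[:200])
```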
## Abstract
Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources that have been verified to meet the strict copyright and licensing protocol detailed in the project. We are releasing the entire pipeline, including:
1. The source code to acquire and process these documents
2. The original document formats with associated provenance and metadata
3. Extracted content in a standardized format
4. Pre-tokenized representations of the documents
5. Various mid- and post-train resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data
All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
## Legal Basis
This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
- Public domain materials
- US government works
- Open access content under permissive licenses
- Content explicitly licensed for AI training
## Papers
For more information about the KL3M Data Project, please refer to:
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{bommarito2025kl3mdata,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
year={2025},
eprint={2504.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{bommarito2025kl3m,
title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
year={2025},
eprint={2503.17247},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/).
|
sairamtelagamsetti/setadataset
|
sairamtelagamsetti
|
2024-12-28T03:47:50Z
| 17 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-28T03:46:24Z
| 0 |
---
license: apache-2.0
---
|
WPRM/auto_regressive_rm_sh
|
WPRM
|
2025-04-16T18:07:11Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-16T18:06:59Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 415353069
num_examples: 40433
download_size: 63876217
dataset_size: 415353069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jerry128/Musique-Ans-Eval
|
jerry128
|
2025-03-03T07:31:52Z
| 38 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-03T07:30:14Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: paragraphs
list:
- name: idx
dtype: int64
- name: is_supporting
dtype: bool
- name: paragraph_text
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_aliases
sequence: string
- name: context
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 52650753
num_examples: 2417
download_size: 22612267
dataset_size: 52650753
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
weqweasdas/llama3_non_delete_rr40k_2e6_bz32_ep3tmp10_temp_exp_genbytmp
|
weqweasdas
|
2025-01-06T04:18:36Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-06T04:18:35Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: prompt
dtype: string
- name: answers
sequence: string
- name: gt
dtype: string
- name: rewards
sequence: bool
- name: proxy_label
dtype: bool
- name: my_prompt
dtype: string
- name: proxy_label_1st_round
dtype: bool
splits:
- name: train
num_bytes: 16070981
num_examples: 5000
download_size: 5785968
dataset_size: 16070981
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tmpmodelsave/acescoreppo_tmp0_10
|
tmpmodelsave
|
2025-02-05T10:26:41Z
| 13 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-05T09:28:09Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: prompt
dtype: string
- name: answers
sequence: string
- name: gt
dtype: string
splits:
- name: train
num_bytes: 1120810
num_examples: 496
download_size: 430699
dataset_size: 1120810
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shuhaib73/ecommerce-fqa-dataset
|
Shuhaib73
|
2025-01-16T03:59:44Z
| 19 | 0 |
[
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-14T06:38:28Z
| 0 |
---
language:
- en
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 38068
num_examples: 158
download_size: 9274
dataset_size: 38068
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DIaac/medical_r1_distil_data_sampled_1000
|
DIaac
|
2025-05-10T12:10:36Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-10T12:10:14Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: reasoning
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23256320.59090909
num_examples: 1000
download_size: 10941577
dataset_size: 23256320.59090909
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nlee-208/persona_zero-janus-dpo-7b
|
nlee-208
|
2024-12-13T04:34:20Z
| 24 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-13T04:34:17Z
| 0 |
---
dataset_info:
features:
- name: persona
dtype: string
- name: prompt
dtype: string
- name: description
dtype: string
- name: generated
dtype: string
splits:
- name: train
num_bytes: 1612198
num_examples: 500
download_size: 889811
dataset_size: 1612198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jainamit/koch_test_twoarms4
|
jainamit
|
2025-03-31T18:21:40Z
| 23 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] |
[
"robotics"
] |
2025-03-31T18:21:37Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "koch_bimanual",
"total_episodes": 3,
"total_frames": 697,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 10.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 10.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
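As a rough sketch (column names come from the feature spec above and the file path follows the `data_path` template; reading via the LeRobot API is the supported route), a single episode can be inspected directly with pandas:
```python
# Hypothetical sketch: read one episode parquet without the LeRobot API.
import pandas as pd

path = "data/chunk-000/episode_000000.parquet"    # from the data_path template above
df = pd.read_parquet(path)
print(df.columns.tolist())                        # action, observation.state, timestamp, ...
print(df["action"].iloc[0])                       # 12-dim vector: left/right arm joints + grippers
```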
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
sunnyday910/Chatml-openai-gsm8k-dataset
|
sunnyday910
|
2025-02-25T07:00:18Z
| 18 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-25T07:00:16Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: messages
dtype: string
splits:
- name: train
num_bytes: 8516771
num_examples: 7473
- name: test
num_bytes: 1531665
num_examples: 1319
download_size: 5337663
dataset_size: 10048436
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
GEM/turku_hockey_data2text
|
GEM
|
2022-10-24T15:30:33Z
| 149 | 0 |
[
"task_categories:table-to-text",
"annotations_creators:expert-created",
"language_creators:unknown",
"multilinguality:unknown",
"source_datasets:original",
"language:fi",
"license:cc-by-nc-sa-4.0",
"region:us",
"data-to-text"
] |
[
"table-to-text"
] |
2022-03-02T23:29:22Z
| 0 |
---
annotations_creators:
- expert-created
language_creators:
- unknown
language:
- fi
license:
- cc-by-nc-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: turku_hockey_data2text
tags:
- data-to-text
---
# Dataset Card for GEM/turku_hockey_data2text
## Dataset Description
- **Homepage:** https://turkunlp.org/hockey_data2text.html
- **Repository:** https://github.com/TurkuNLP/Turku-hockey-data2text
- **Paper:** https://aclanthology.org/W19-6125/
- **Leaderboard:** N/A
- **Point of Contact:** Jenna Kanerva, Filip Ginter
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/turku_hockey_data2text).
### Dataset Summary
This is a Finnish data-to-text dataset in which the input is structured information about a hockey game and the output a description of the game.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/turku_hockey_data2text')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/turku_hockey_data2text).
#### website
[Website](https://turkunlp.org/hockey_data2text.html)
#### paper
[ACL anthology](https://aclanthology.org/W19-6125/)
#### authors
Jenna Kanerva, Samuel Rönnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Website](https://turkunlp.org/hockey_data2text.html)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/TurkuNLP/Turku-hockey-data2text)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL anthology](https://aclanthology.org/W19-6125/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{kanerva2019newsgen,
  title = {Template-free Data-to-Text Generation of Finnish Sports News},
  author = {Jenna Kanerva and Samuel R{\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},
  booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa’19)},
  year = {2019}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Jenna Kanerva, Filip Ginter
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected], [email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
written standard language
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`Finnish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The original news articles are written by professional journalists. The text passages extracted during annotation may have been slightly edited from the original wording.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
This dataset was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Describe an event from an ice hockey game based on the given structural data.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of Turku
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Jenna Kanerva, Samuel Rönnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
The project was supported by the Google Digital News Innovation Fund.
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenna Kanerva, Filip Ginter (TurkuNLP / University of Turku)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
The dataset is organized as games, where each game is a list of events. If an event was annotated (a corresponding sentence was found in the news article), its `text` field contains a non-empty string.
For each game (dict), there are keys `gem_id` (string), `id` (string), `news_article` (string), and `events` (list).
For each event (dict), the keys populated with non-empty values depend on the event type (e.g. goal or penalty). The mandatory keys for every event are `event_id` (string), `event_type` (string), `text` (string, empty string if not annotated), and `multi_reference` (bool). Keys that are not relevant to the specific event type are left empty.
The following keys are relevant for every event type:
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event, possible values are `game result`, `goal`, `penalty`, or `saves` (string)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
The remaining fields are specific to the event type. The relevant fields for each event type are:
**game result:**
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `home_team`: Name of the home team (string)
- `guest_team`: Name of the guest team (string)
- `score`: Final score of the game, in the form of home–guest (string)
- `periods`: Scores for individual periods, each in the form of home–guest score in that period (list of strings)
- `features`: Additional features, such as overtime win or shoot out (list of strings)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
**goal:**
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `player`: Name of the player scoring (string)
- `assist`: Names of the players assisting, at most two players (list of strings)
- `team`: Team scoring with possible values of `home` or `guest` (string)
- `team_name`: Name of the team scoring (string)
- `score`: Score after the goal, in the form of home–guest (string)
- `time`: Time of the goal, minutes and seconds from the beginning (string)
- `features`: Additional features, such as power play or short-handed goal (list of strings)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
**penalty:**
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `player`: Name of the player getting the penalty (string)
- `team`: Team getting the penalty with possible values of `home` or `guest` (string)
- `team_name`: Name of the team getting the penalty (string)
- `penalty_minutes`: Penalty minutes (string)
- `time`: Time of the penalty, minutes and seconds from the beginning (string)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
**saves:**
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `player`: Name of the goalkeeper (string)
- `team`: Team of the goalkeeper with possible values of `home` or `guest` (string)
- `team_name`: Name of the team (string)
- `saves`: Number of saves in the game (string)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
Text passages describing multiple events (multi_reference):
Some text passages refer to multiple events in such a way that separating them into individual statements is not adequate (e.g. "The home team received two penalties towards the end of the first period."). In these cases, multiple events are aligned to the same text passage: the first event (in chronological order) includes the annotated text passage, while the remaining events referring to the same passage carry the identifier of the first event in their text field (e.g. `text`: "E4").
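As a rough illustration of how these fields fit together, the sketch below pairs each annotated event with its text passage and resolves `multi_reference` pointers such as `"E4"` back to the passage of the first event. The parallel-list layout of `events` follows the example instance below; the helper name and the decision to skip unannotated events are illustrative choices, not part of the dataset.
```python
import datasets

data = datasets.load_dataset('GEM/turku_hockey_data2text')

def aligned_event_texts(game):
    """Return (event_id, event_type, text) triples for annotated events,
    resolving multi_reference pointers (e.g. 'E4') to the referenced passage."""
    events = game['events']  # dict of parallel lists, see the example instance
    text_by_id = dict(zip(events['event_id'], events['text']))
    triples = []
    for i, event_id in enumerate(events['event_id']):
        text = events['text'][i]
        # Later events in a multi-reference group store the first event's id.
        if events['multi_reference'][i] and text in text_by_id:
            text = text_by_id[text]
        if text:  # an empty string means the event was not annotated
            triples.append((event_id, events['event_type'][i], text))
    return triples

print(aligned_event_texts(data['train'][0]))
```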
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
'gem_id': 'gem-turku_hockey_data2text-train-0',
'id': '20061031-TPS-HPK',
'news_article': 'HPK:n hyvä syysvire jatkuu jääkiekon SM-liigassa. Tiistaina HPK kukisti mainiolla liikkeellä ja tehokkaalla ylivoimapelillä TPS:n vieraissa 1–0 (1–0, 0–0, 0–0).\nHPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.\nToisessa ja kolmannessa erässä HPK tarjosi edelleen TPS:lle runsaasti tilanteita, mutta maalia eivät turkulaiset millään ilveellä saaneet. Pahin este oli loistavan pelin Hämeenlinnan maalilla pelannut Mika Oksa.\nTPS:n maalissa Jani Hurme ei osumille mitään mahtanut. Joukkueen suuri yksinäinen kenttäpelaaja oli Kai Nurminen, mutta hänelläkään ei ollut onnea maalitilanteissa.',
'events':
{
'event_id': ['E1', 'E2', 'E3'],
'event_type': ['game result', 'penalty', 'goal'],
'text': ['HPK kukisti TPS:n vieraissa 1–0 (1–0, 0–0, 0–0).', '', 'HPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.'],
'home_team': ['TPS', '', ''],
'guest_team': ['HPK', '', ''],
'score': ['0–1', '', '0–1'],
'periods': [['0–1', '0–0', '0–0'], [], []],
'features': [[], [], ['power play']],
'player': ['', 'Fredrik Svensson', 'Mikko Mäenpää'],
'assist': [[], [], ['Jani Keinänen', 'Toni Mäkiaho']],
'team': ['', 'guest', 'guest'],
'team_name': ['', 'HPK', 'HPK'],
'time': ['', '9.28', '14.57'],
'penalty_minutes': ['', '2', ''],
'saves': ['', '', ''],
'multi_reference': [false, false, false]
}
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The corpus includes three splits: train, validation, and test.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The dataset was created to develop machine-learned text generation models for Finnish ice hockey news, where the generated text should reflect the natural language variation found in game reports written by professional journalists. Because the original game reports often include additional information not derivable from the game statistics, the corpus was fully manually curated to remove all such information from the natural language descriptions. The rationale for this curation was to prevent the model from 'hallucinating' additional facts.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
This is the only data2text corpus for Finnish in GEM.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
morphological inflection, language variation
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points modified`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
Structural data was translated into English.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `METEOR`, `ROUGE`, `WER`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Automatic evaluation: BLEU, NIST, METEOR, ROUGE-L, CIDEr
Manual evaluation: factual mistakes, grammatical errors, minimum edit distance to an acceptable game report (using WER)
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset is designed for text generation (data2text), where the original source of the natural language descriptions is news articles written by journalists. Because the link between the structured data (ice hockey game statistics) and the news articles describing the games is quite weak (the articles include a lot of information not derivable from the statistics while leaving many events unmentioned), the corpus includes full manual annotation aligning the events extracted from the game statistics with the corresponding natural language passages extracted from the news articles.
Each event is manually aligned to a sentence-like passage; if no suitable passage was found, the annotation is left empty (with value `None`). The extracted passages were manually edited to remove any information that is neither derivable from the game statistics nor considered world knowledge. This manual curation of passages is designed to prevent model hallucination, i.e. the model learning to generate facts not derivable from the input data.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Describing the given events (structural data) in natural language, and therefore generating ice hockey game reports.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Other`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The initial data, both game statistics and news articles, were obtained from the Finnish News Agency STT news archives released for academic use (http://urn.fi/urn:nbn:fi:lb-2019041501). The original news articles are written by professional journalists.
We (TurkuNLP) gratefully acknowledge the collaboration of Maija Paikkala, Salla Salmela and Pihla Lehmusjoki from the Finnish News Agency STT while creating the corpus.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Ice hockey, news
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
Only games for which both the game statistics and a news article describing the game were available (matched on timestamps and team names) were included.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
1
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Members of the TurkuNLP research group, native speakers of Finnish.
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
1
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
Manual alignment of events with their natural language descriptions. Information that is neither derivable from the input data nor world knowledge was removed in order to prevent model 'hallucination'.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Manual inspection of examples during the initial annotation training phase.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
The corpus license was agreed with the providers of the source material.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset represents only written standard language.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
None
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
VGraf/context_switch_alpacaeval_7maxLeadingTurns
|
VGraf
|
2025-05-09T23:02:28Z
| 7 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-09T23:02:26Z
| 0 |
---
dataset_info:
features:
- name: dataset
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 12202219
num_examples: 805
download_size: 5989049
dataset_size: 12202219
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
google/wmt24pp
|
google
|
2025-03-13T21:53:34Z
| 2,151 | 38 |
[
"task_categories:translation",
"language:ar",
"language:bg",
"language:bn",
"language:ca",
"language:da",
"language:de",
"language:el",
"language:es",
"language:et",
"language:fa",
"language:fi",
"language:fr",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:kn",
"language:ko",
"language:lt",
"language:lv",
"language:ml",
"language:mr",
"language:nl",
"language:no",
"language:pa",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.12404",
"region:us"
] |
[
"translation"
] |
2025-02-06T15:19:53Z
| 2 |
---
license: apache-2.0
language:
- ar
- bg
- bn
- ca
- da
- de
- el
- es
- et
- fa
- fi
- fr
- gu
- he
- hi
- hr
- hu
- id
- is
- it
- ja
- kn
- ko
- lt
- lv
- ml
- mr
- nl
- 'no'
- pa
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- sw
- ta
- te
- th
- tr
- uk
- ur
- vi
- zh
- zu
task_categories:
- translation
size_categories:
- 10K<n<100K
configs:
- config_name: en-ar_EG
data_files:
- split: train
path: "en-ar_EG.jsonl"
- config_name: en-ar_SA
data_files:
- split: train
path: "en-ar_SA.jsonl"
- config_name: en-bg_BG
data_files:
- split: train
path: "en-bg_BG.jsonl"
- config_name: en-bn_IN
data_files:
- split: train
path: "en-bn_IN.jsonl"
- config_name: en-ca_ES
data_files:
- split: train
path: "en-ca_ES.jsonl"
- config_name: en-cs_CZ
data_files:
- split: train
path: "en-cs_CZ.jsonl"
- config_name: en-da_DK
data_files:
- split: train
path: "en-da_DK.jsonl"
- config_name: en-de_DE
data_files:
- split: train
path: "en-de_DE.jsonl"
- config_name: en-el_GR
data_files:
- split: train
path: "en-el_GR.jsonl"
- config_name: en-es_MX
data_files:
- split: train
path: "en-es_MX.jsonl"
- config_name: en-et_EE
data_files:
- split: train
path: "en-et_EE.jsonl"
- config_name: en-fa_IR
data_files:
- split: train
path: "en-fa_IR.jsonl"
- config_name: en-fi_FI
data_files:
- split: train
path: "en-fi_FI.jsonl"
- config_name: en-fil_PH
data_files:
- split: train
path: "en-fil_PH.jsonl"
- config_name: en-fr_CA
data_files:
- split: train
path: "en-fr_CA.jsonl"
- config_name: en-fr_FR
data_files:
- split: train
path: "en-fr_FR.jsonl"
- config_name: en-gu_IN
data_files:
- split: train
path: "en-gu_IN.jsonl"
- config_name: en-he_IL
data_files:
- split: train
path: "en-he_IL.jsonl"
- config_name: en-hi_IN
data_files:
- split: train
path: "en-hi_IN.jsonl"
- config_name: en-hr_HR
data_files:
- split: train
path: "en-hr_HR.jsonl"
- config_name: en-hu_HU
data_files:
- split: train
path: "en-hu_HU.jsonl"
- config_name: en-id_ID
data_files:
- split: train
path: "en-id_ID.jsonl"
- config_name: en-is_IS
data_files:
- split: train
path: "en-is_IS.jsonl"
- config_name: en-it_IT
data_files:
- split: train
path: "en-it_IT.jsonl"
- config_name: en-ja_JP
data_files:
- split: train
path: "en-ja_JP.jsonl"
- config_name: en-kn_IN
data_files:
- split: train
path: "en-kn_IN.jsonl"
- config_name: en-ko_KR
data_files:
- split: train
path: "en-ko_KR.jsonl"
- config_name: en-lt_LT
data_files:
- split: train
path: "en-lt_LT.jsonl"
- config_name: en-lv_LV
data_files:
- split: train
path: "en-lv_LV.jsonl"
- config_name: en-ml_IN
data_files:
- split: train
path: "en-ml_IN.jsonl"
- config_name: en-mr_IN
data_files:
- split: train
path: "en-mr_IN.jsonl"
- config_name: en-nl_NL
data_files:
- split: train
path: "en-nl_NL.jsonl"
- config_name: en-no_NO
data_files:
- split: train
path: "en-no_NO.jsonl"
- config_name: en-pa_IN
data_files:
- split: train
path: "en-pa_IN.jsonl"
- config_name: en-pl_PL
data_files:
- split: train
path: "en-pl_PL.jsonl"
- config_name: en-pt_BR
data_files:
- split: train
path: "en-pt_BR.jsonl"
- config_name: en-pt_PT
data_files:
- split: train
path: "en-pt_PT.jsonl"
- config_name: en-ro_RO
data_files:
- split: train
path: "en-ro_RO.jsonl"
- config_name: en-ru_RU
data_files:
- split: train
path: "en-ru_RU.jsonl"
- config_name: en-sk_SK
data_files:
- split: train
path: "en-sk_SK.jsonl"
- config_name: en-sl_SI
data_files:
- split: train
path: "en-sl_SI.jsonl"
- config_name: en-sr_RS
data_files:
- split: train
path: "en-sr_RS.jsonl"
- config_name: en-sv_SE
data_files:
- split: train
path: "en-sv_SE.jsonl"
- config_name: en-sw_KE
data_files:
- split: train
path: "en-sw_KE.jsonl"
- config_name: en-sw_TZ
data_files:
- split: train
path: "en-sw_TZ.jsonl"
- config_name: en-ta_IN
data_files:
- split: train
path: "en-ta_IN.jsonl"
- config_name: en-te_IN
data_files:
- split: train
path: "en-te_IN.jsonl"
- config_name: en-th_TH
data_files:
- split: train
path: "en-th_TH.jsonl"
- config_name: en-tr_TR
data_files:
- split: train
path: "en-tr_TR.jsonl"
- config_name: en-uk_UA
data_files:
- split: train
path: "en-uk_UA.jsonl"
- config_name: en-ur_PK
data_files:
- split: train
path: "en-ur_PK.jsonl"
- config_name: en-vi_VN
data_files:
- split: train
path: "en-vi_VN.jsonl"
- config_name: en-zh_CN
data_files:
- split: train
path: "en-zh_CN.jsonl"
- config_name: en-zh_TW
data_files:
- split: train
path: "en-zh_TW.jsonl"
- config_name: en-zu_ZA
data_files:
- split: train
path: "en-zu_ZA.jsonl"
---
# WMT24++
This repository contains the human translation and post-edit data for the 55 en->xx language pairs released in
the publication
[WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects](https://arxiv.org/abs/2502.12404).
If you are interested in the MT/LLM system outputs and automatic metric scores, please see [MTME](https://github.com/google-research/mt-metrics-eval/tree/main?tab=readme-ov-file#wmt24-data).
If you are interested in the images of the source URLs for each document, please see [here](https://huggingface.co/datasets/google/wmt24pp-images).
## Schema
Each language pair is stored in its own jsonl file.
Each row is a serialized JSON object with the following fields:
- `lp`: The language pair (e.g., `"en-de_DE"`).
- `domain`: The domain of the source, either `"canary"`, `"news"`, `"social"`, `"speech"`, or `"literary"`.
- `document_id`: The unique ID that identifies the document the source came from.
- `segment_id`: The globally unique ID that identifies the segment.
- `is_bad_source`: A Boolean that indicates whether this source is low quality (e.g., HTML, URLs, emojis). In the paper, the segments marked as `true` were removed from the evaluation, and we recommend doing the same.
- `source`: The English source text.
- `target`: The post-edit of `original_target`. We recommend using the post-edit as the default reference.
- `original_target`: The original reference translation.
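As a quick illustration (not part of the official release), the language-pair configs defined above can be loaded directly with the `datasets` library, and filtering on `is_bad_source` follows the recommendation above:
```python
from datasets import load_dataset

# Each language pair is its own config with a single "train" split.
wmt = load_dataset("google/wmt24pp", "en-de_DE", split="train")

# Drop segments flagged as bad sources, as recommended above.
usable = wmt.filter(lambda ex: not ex["is_bad_source"])

for ex in usable.select(range(3)):
    print(f'[{ex["domain"]}] {ex["source"]} -> {ex["target"]}')
```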
## Citation
If you use any of the data released in our work, please cite the following paper:
```
@misc{deutsch2025wmt24expandinglanguagecoverage,
title={{WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects}},
author={Daniel Deutsch and Eleftheria Briakou and Isaac Caswell and Mara Finkelstein and Rebecca Galor and Juraj Juraska and Geza Kovacs and Alison Lui and Ricardo Rei and Jason Riesa and Shruti Rijhwani and Parker Riley and Elizabeth Salesky and Firas Trabelsi and Stephanie Winkler and Biao Zhang and Markus Freitag},
year={2025},
eprint={2502.12404},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.12404},
}
```
## Helpful Python Constants
```python
LANGUAGE_PAIRS = (
"en-ar_EG", "en-ar_SA", "en-bg_BG", "en-bn_IN", "en-ca_ES", "en-cs_CZ", "en-da_DK", "en-de_DE",
"en-el_GR", "en-es_MX", "en-et_EE", "en-fa_IR", "en-fi_FI", "en-fil_PH", "en-fr_CA", "en-fr_FR",
"en-gu_IN", "en-he_IL", "en-hi_IN", "en-hr_HR", "en-hu_HU", "en-id_ID", "en-is_IS", "en-it_IT",
"en-ja_JP", "en-kn_IN", "en-ko_KR", "en-lt_LT", "en-lv_LV", "en-ml_IN", "en-mr_IN", "en-nl_NL",
"en-no_NO", "en-pa_IN", "en-pl_PL", "en-pt_BR", "en-pt_PT", "en-ro_RO", "en-ru_RU", "en-sk_SK",
"en-sl_SI", "en-sr_RS", "en-sv_SE", "en-sw_KE", "en-sw_TZ", "en-ta_IN", "en-te_IN", "en-th_TH",
"en-tr_TR", "en-uk_UA", "en-ur_PK", "en-vi_VN", "en-zh_CN", "en-zh_TW", "en-zu_ZA",
)
LANGUAGE_BY_CODE = {
"ar_EG": "Arabic",
"ar_SA": "Arabic",
"bg_BG": "Bulgarian",
"bn_BD": "Bengali",
"bn_IN": "Bengali",
"ca_ES": "Catalan",
"cs_CZ": "Czech",
"da_DK": "Danish",
"de_DE": "German",
"el_GR": "Greek",
"es_MX": "Spanish",
"et_EE": "Estonian",
"fa_IR": "Farsi",
"fi_FI": "Finnish",
"fil_PH": "Filipino",
"fr_CA": "French",
"fr_FR": "French",
"gu_IN": "Gujarati",
"he_IL": "Hebrew",
"hi_IN": "Hindi",
"hr_HR": "Croatian",
"hu_HU": "Hungarian",
"id_ID": "Indonesian",
"is_IS": "Icelandic",
"it_IT": "Italian",
"ja_JP": "Japanese",
"kn_IN": "Kannada",
"ko_KR": "Korean",
"lt_LT": "Lithuanian",
"lv_LV": "Latvian",
"ml_IN": "Malayalam",
"mr_IN": "Marathi",
"nl_NL": "Dutch",
"no_NO": "Norwegian",
"pa_IN": "Punjabi",
"pl_PL": "Polish",
"pt_BR": "Portuguese",
"pt_PT": "Portuguese",
"ro_RO": "Romanian",
"ru_RU": "Russian",
"sk_SK": "Slovak",
"sl_SI": "Slovenian",
"sr_RS": "Serbian",
"sv_SE": "Swedish",
"sw_KE": "Swahili",
"sw_TZ": "Swahili",
"ta_IN": "Tamil",
"te_IN": "Telugu",
"th_TH": "Thai",
"tr_TR": "Turkish",
"uk_UA": "Ukrainian",
"ur_PK": "Urdu",
"vi_VN": "Vietnamese",
"zh_CN": "Mandarin",
"zh_TW": "Mandarin",
"zu_ZA": "Zulu",
}
REGION_BY_CODE = {
"ar_EG": "Egypt",
"ar_SA": "Saudi Arabia",
"bg_BG": "Bulgaria",
"bn_BD": "Bangladesh",
"bn_IN": "India",
"ca_ES": "Spain",
"cs_CZ": "Czechia",
"da_DK": "Denmark",
"de_DE": "Germany",
"el_GR": "Greece",
"es_MX": "Mexico",
"et_EE": "Estonia",
"fa_IR": "Iran",
"fi_FI": "Finland",
"fil_PH": "Philippines",
"fr_CA": "Canada",
"fr_FR": "France",
"gu_IN": "India",
"he_IL": "Israel",
"hi_IN": "India",
"hr_HR": "Croatia",
"hu_HU": "Hungary",
"id_ID": "Indonesia",
"is_IS": "Iceland",
"it_IT": "Italy",
"ja_JP": "Japan",
"kn_IN": "India",
"ko_KR": "South Korea",
"lt_LT": "Lithuania",
"lv_LV": "Latvia",
"ml_IN": "India",
"mr_IN": "India",
"nl_NL": "Netherlands",
"no_NO": "Norway",
"pa_IN": "India",
"pl_PL": "Poland",
"pt_BR": "Brazil",
"pt_PT": "Portugal",
"ro_RO": "Romania",
"ru_RU": "Russia",
"sk_SK": "Slovakia",
"sl_SI": "Slovenia",
"sr_RS": "Serbia",
"sv_SE": "Sweden",
"sw_KE": "Kenya",
"sw_TZ": "Tanzania",
"ta_IN": "India",
"te_IN": "India",
"th_TH": "Thailand",
"tr_TR": "Turkey",
"uk_UA": "Ukraine",
"ur_PK": "Pakistan",
"vi_VN": "Vietnam",
"zh_CN": "China",
"zh_TW": "Taiwan",
"zu_ZA": "South Africa",
}
```
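For example (an illustrative snippet, assuming the dictionaries above are in scope), the constants can be combined to turn a config name into a human-readable label:
```python
def describe(lp: str) -> str:
    """Map a config name like 'en-de_DE' to 'English -> German (Germany)'."""
    code = lp.split("-", 1)[1]
    return f"English -> {LANGUAGE_BY_CODE[code]} ({REGION_BY_CODE[code]})"

print(describe("en-de_DE"))  # English -> German (Germany)
print(describe("en-sw_KE"))  # English -> Swahili (Kenya)
```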
|
# Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards (a short sketch follows this list)
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
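A minimal text-mining sketch along these lines is shown below. The repo identifier is a placeholder (this card does not state it), and the `card` column name is an assumption about the dataset's schema; adjust both to match the actual dataset.
```python
from collections import Counter
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub identifier.
cards = load_dataset("<org>/<dataset-cards-dataset>", split="train")

# Count the most common markdown section headings across all cards
# (assumes the card text lives in a column named "card").
headings = Counter(
    line.strip().lstrip("#").strip()
    for card_text in cards["card"]
    for line in card_text.splitlines()
    if line.lstrip().startswith("#")
)
print(headings.most_common(10))
```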
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is the `README.md` files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. Some dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact