|
--- |
|
dataset_info: |
|
- config_name: arb_Arab |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 4913929 |
|
num_examples: 1000 |
|
download_size: 2381622 |
|
dataset_size: 4913929 |
|
- config_name: ary_Arab |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 3086740 |
|
num_examples: 1000 |
|
download_size: 1515329 |
|
dataset_size: 3086740 |
|
- config_name: arz_Arab |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 3175887 |
|
num_examples: 1000 |
|
download_size: 1543207 |
|
dataset_size: 3175887 |
|
- config_name: bar_Latn |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 2494628 |
|
num_examples: 1000 |
|
download_size: 1517640 |
|
dataset_size: 2494628 |
|
- config_name: cmn_Hani |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 4075430 |
|
num_examples: 1000 |
|
download_size: 2925797 |
|
dataset_size: 4075430 |
|
|
- config_name: dan_Latn |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 3978961 |
|
num_examples: 1000 |
|
download_size: 2315349 |
|
dataset_size: 3978961 |
|
- config_name: default |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 73894945 |
|
num_examples: 13000 |
|
download_size: 38830605 |
|
dataset_size: 73894945 |
|
- config_name: fas_Arab |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 5759890 |
|
num_examples: 1000 |
|
download_size: 2662440 |
|
dataset_size: 5759890 |
|
- config_name: gmh_Latn |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 16120134 |
|
num_examples: 1000 |
|
download_size: 9109369 |
|
dataset_size: 16120134 |
|
- config_name: hin_Deva |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 6238691 |
|
num_examples: 1000 |
|
download_size: 2358281 |
|
dataset_size: 6238691 |
|
|
- config_name: lvs_Latn |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 4608981 |
|
num_examples: 1000 |
|
download_size: 2807535 |
|
dataset_size: 4608981 |
|
- config_name: rus_Cyrl |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 9674640 |
|
num_examples: 1000 |
|
download_size: 4687716 |
|
dataset_size: 9674640 |
|
- config_name: tat_Cyrl |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: educational_value_labels |
|
sequence: string |
|
- name: annotator_ids |
|
sequence: string |
|
- name: problematic_content_label_present |
|
dtype: bool |
|
- name: problematic_content_label_agreement |
|
dtype: float64 |
|
- name: language_names |
|
dtype: string |
|
- name: language_code |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 6697853 |
|
num_examples: 1000 |
|
download_size: 3270919 |
|
dataset_size: 6697853 |
|
configs: |
|
- config_name: arb_Arab |
|
data_files: |
|
- split: train |
|
path: arb_Arab/train-* |
|
- config_name: ary_Arab |
|
data_files: |
|
- split: train |
|
path: ary_Arab/train-* |
|
- config_name: arz_Arab |
|
data_files: |
|
- split: train |
|
path: arz_Arab/train-* |
|
- config_name: bar_Latn |
|
data_files: |
|
- split: train |
|
path: bar_Latn/train-* |
|
- config_name: cmn_Hani |
|
data_files: |
|
- split: train |
|
path: cmn_Hani/train-* |
|
|
- config_name: dan_Latn |
|
data_files: |
|
- split: train |
|
path: dan_Latn/train-* |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: data/train-* |
|
- config_name: fas_Arab |
|
data_files: |
|
- split: train |
|
path: fas_Arab/train-* |
|
- config_name: gmh_Latn |
|
data_files: |
|
- split: train |
|
path: gmh_Latn/train-* |
|
- config_name: hin_Deva |
|
data_files: |
|
- split: train |
|
path: hin_Deva/train-* |
|
|
- config_name: lvs_Latn |
|
data_files: |
|
- split: train |
|
path: lvs_Latn/train-* |
|
- config_name: rus_Cyrl |
|
data_files: |
|
- split: train |
|
path: rus_Cyrl/train-* |
|
- config_name: tat_Cyrl |
|
data_files: |
|
- split: train |
|
path: tat_Cyrl/train-* |
|
tags: |
|
- argilla |
|
- data-is-better-together |
|
task_categories: |
|
- text-classification
|
language: |
|
- lvs |
|
- fas |
|
- dan |
|
- arz |
|
- ary |
|
- arb |
|
- tat |
|
- rus |
|
- gmh |
|
- bar |
|
- hin |
|
|
- cmn |
|
pretty_name: FineWeb-C
|
--- |
|
# FineWeb-C: Educational content in many languages, labelled by the community |
|
|
|
<center> |
|
<img src="https://huggingface.co/spaces/data-is-better-together/fineweb-communications-pack/resolve/main/fineweb-c-card-header.png" alt="FineWeb 2: A sparkling update with 1000s of languages"> |
|
</center> |
|
|
|
> *Multilingual data is better together!* |
|
|
|
**Note**: This dataset and its dataset card are works in progress. You can help contribute to the dataset [here](https://huggingface.co/spaces/data-is-better-together/fineweb-c) and join the community discussions in [rocket chat](https://huggingface.co/spaces/HuggingFaceFW/discussion)!
|
|
|
## What is this? |
|
|
|
This is a collaborative, community-driven project that expands upon the [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. Our goal is to create high-quality educational content annotations across hundreds of languages. |
|
|
|
By enhancing web content with these annotations, we aim to improve the development of Large Language Models (LLMs) in all languages, making AI technology more accessible and effective globally. |
|
|
|
The annotations in this dataset will help train AI systems to automatically identify high-quality educational content in more languages and in turn help build better Large Language Models for all languages. |
|
|
|
### What the community is doing: |
|
|
|
- For a given language, look at a page of web content from the [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset in Argilla. |
|
- Rate how educational the content is. |
|
- Flag problematic content, i.e. content that is malformed or in the wrong language.
|
|
|
Once a language reaches 1,000 annotations, it will be included in this dataset! Alongside rating the educational quality of the content, different language communities are discussing other ways to improve the quality of data for their language in our [rocket chat](https://chat.huggingface.co/channel/fineweb-c) discussion channel.
|
|
|
### What's been done so far? |
|
|
|
So far **318** members of the Hugging Face community have submitted **32,863** annotations. |
|
|
|
The following languages have reached the 1,000 annotation threshold to be included in the dataset. We'll keep updating this dataset as more annotations are added! |
|
|
|
| Language Code | Language Name | Completed Annotations | Annotators | |
|
|--------------|---------------|---------------------|------------| |
|
| arb_Arab | Standard Arabic | 1000 | 10 | |
|
| ary_Arab | Moroccan Arabic | 1000 | 15 | |
|
| arz_Arab | Egyptian Arabic | 1000 | 9 | |
|
| bar_Latn | Bavarian | 1000 | 1 | |
|
| cmn_Hani | Mandarin Chinese | 1000 | 3 | |
|
| dan_Latn | Danish | 1000 | 18 | |
|
| fas_Arab | Persian | 1000 | 3 | |
|
| gmh_Latn | Middle High German | 1000 | 1 | |
|
| hin_Deva | Hindi | 1000 | 3 | |
|
| lvs_Latn | Standard Latvian | 1000 | 5 | |
|
| rus_Cyrl | Russian | 1000 | 4 | |
|
| tat_Cyrl | Tatar | 1000 | 7 | |
|
|
|
|
|
_You can help contribute to the dataset [here](https://huggingface.co/spaces/data-is-better-together/fineweb-c)._ |
|
|
|
Below is an overview of the number of annotations submitted for each language (updated daily). |
|
|
|
<iframe src="https://huggingface.co/datasets/data-is-better-together/fineweb-c-progress/embed/sql-console/dhn8hw-" frameborder="0" width="100%" height="560px"></iframe> |
|
|
|
### Why are we doing this? |
|
|
|
There are many languages in the world where no high quality LLMs exist. Having high quality data is a central part of building high quality LLMs. [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) is a crucial step in improving the availability of high quality data for many languages. We plan to go a step further. |
|
|
|
#### FineWeb-Edu for every language?
|
|
|
[FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) is a dataset built on the original [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) dataset. It was constructed by training an educational quality classifier on annotations generated by Llama3-70B-Instruct and using this classifier to retain only the most educational web pages.
|
|
|
FineWeb-Edu outperforms FineWeb on popular benchmarks. Crucially, this approach reduces the amount of data needed to train a high-quality LLM, lowering the barrier to building one for many languages.
|
|
|
We want to make it possible to build FineWeb-Edu-style datasets for all the world's languages. To do this, we need annotations with which to train an educational quality classifier.
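As a sketch of how annotations might feed such a classifier, community labels can be mapped to numeric scores and averaged per page to produce a training target. The label names and mapping below are illustrative assumptions, not the project's final methodology:

```python
# Illustrative label-to-score mapping (an assumption for this sketch;
# the actual label set and any future classifier's targets may differ).
LABEL_TO_SCORE = {
    "None": 0,
    "Minimal": 1,
    "Basic": 2,
    "Good": 3,
    "Excellent": 4,
}


def page_score(labels: list[str]) -> float:
    """Average the annotators' labels into a single numeric training target."""
    return sum(LABEL_TO_SCORE[label] for label in labels) / len(labels)


print(page_score(["Good", "Excellent", "Good"]))  # ~3.33
```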
|
|
|
This in turn will allow us to build the next generation of Large Language Models for many languages. |
|
|
|
#### Why not use LLMs to annotate the data? |
|
|
|
For high-resource languages, using an LLM to generate educational quality annotations can be a good solution. However, for many languages LLMs are not able to generate high-quality annotations, or we don't have enough data to validate whether the annotations are correct.
|
|
|
## How can I help? |
|
|
|
You can help by contributing to the dataset [here](https://huggingface.co/spaces/data-is-better-together/fineweb-c) and joining the community discussions in [rocket chat](https://chat.huggingface.co/channel/fineweb-c)!
|
|
|
## Why would I bother to contribute to this dataset? |
|
|
|
Your contributions directly shape the future of AI in your language. Here's why this matters: |
|
|
|
1. Break the AI language barrier: Most commercial AI companies focus on profitable languages, leaving many communities behind. Your work helps bring AI capabilities to more languages. |
|
|
|
2. Keep it open: Unlike proprietary datasets locked away by companies, FineWeb-C is an open dataset. This means anyone can use it to build AI systems that truly serve their community's needs. Through this open approach we also learn which approaches work best for different languages.
|
|
|
3. Be part of something bigger: Just as Wikipedia showed how volunteers can build invaluable resources, the Hugging Face community has created numerous open models and datasets. You're joining a movement to democratize AI technology. |
|
|
|
Every annotation counts. Whether you can contribute ten minutes or ten hours, your input helps build a more inclusive future for AI technology 🤗 |
|
|
|
## Who contributed to this dataset so far? |
|
|
|
These are the top 10 contributors to this release of the dataset. Make sure to give them a follow on the Hub to show your appreciation! |
|
|
|
| Hugging Face Username | Submissions | |
|
|----------|------------| |
|
| [stefan-it](https://huggingface.co/stefan-it) | 2,011 | |
|
| [hasnachouikhi](https://huggingface.co/hasnachouikhi) | 1,865 | |
|
| [catastropiyush](https://huggingface.co/catastropiyush) | 1,053 | |
|
| [vikkormallansohn](https://huggingface.co/vikkormallansohn) | 1,000 | |
|
| [rasgaard](https://huggingface.co/rasgaard) | 1,000 | |
|
| [Maani](https://huggingface.co/Maani) | 985 | |
|
| [paperplanedeemo](https://huggingface.co/paperplanedeemo) | 978 | |
|
| [JakobBlaa](https://huggingface.co/JakobBlaa) | 978 | |
|
| [anhha9](https://huggingface.co/anhha9) | 927 | |
|
| [Aivis](https://huggingface.co/Aivis) | 894 | |
|
|
|
|
|
Data work is the underappreciated foundation of AI and ML. This dataset is built by the community, for the community. Below is a leaderboard that is updated daily and shows all the contributors to this annotation effort.
|
|
|
<iframe src="https://huggingface.co/datasets/data-is-better-together/fineweb-c-progress/embed/sql-console/DJ2n1Z0" frameborder="0" width="100%" height="560px"></iframe> |
|
|
|
|
|
#### Language-specific Contributors |
|
|
|
Below you can find a list of all the contributors to this release of the dataset for each language ❤️ |
|
|
|
<details> |
|
<summary>Detailed Contributor Statistics for each language</summary> |
|
|
|
|
|
|
|
### Bavarian (bar_Latn) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [stefan-it](https://huggingface.co/stefan-it) | 1000 | |
|
</details> |
|
|
|
|
|
|
|
### Danish (dan_Latn) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [rasgaard](https://huggingface.co/rasgaard) | 1000 | |
|
| [JakobBlaa](https://huggingface.co/JakobBlaa) | 978 | |
|
| [saattrupdan](https://huggingface.co/saattrupdan) | 200 | |
|
| [FrLars21](https://huggingface.co/FrLars21) | 80 | |
|
| [markhougaard](https://huggingface.co/markhougaard) | 72 | |
|
| [KennethEnevoldsen](https://huggingface.co/KennethEnevoldsen) | 44 | |
|
| [Apasalic](https://huggingface.co/Apasalic) | 33 | |
|
| [tqvist](https://huggingface.co/tqvist) | 33 | |
|
| [cnila](https://huggingface.co/cnila) | 31 | |
|
| [Soeren-B](https://huggingface.co/Soeren-B) | 28 | |
|
| [KristianL](https://huggingface.co/KristianL) | 22 | |
|
| [mathiasn1](https://huggingface.co/mathiasn1) | 16 | |
|
| [ITK-dev](https://huggingface.co/ITK-dev) | 12 | |
|
| [jannikskytt](https://huggingface.co/jannikskytt) | 8 | |
|
| [AndreasLH](https://huggingface.co/AndreasLH) | 7 | |
|
| [perlausten](https://huggingface.co/perlausten) | 5 | |
|
| [sorenmulli](https://huggingface.co/sorenmulli) | 3 | |
|
| [organicoder](https://huggingface.co/organicoder) | 1 | |
|
</details> |
|
|
|
|
|
|
|
### Egyptian Arabic (arz_Arab) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [mmhamdy](https://huggingface.co/mmhamdy) | 734 | |
|
| [aishahamdy](https://huggingface.co/aishahamdy) | 141 | |
|
| [oumayma03](https://huggingface.co/oumayma03) | 54 | |
|
| [omarelshehy](https://huggingface.co/omarelshehy) | 46 | |
|
| [ghada00](https://huggingface.co/ghada00) | 14 | |
|
| [heba1998](https://huggingface.co/heba1998) | 10 | |
|
| [chemouda](https://huggingface.co/chemouda) | 3 | |
|
| [aammari](https://huggingface.co/aammari) | 2 | |
|
| [amreleraqi](https://huggingface.co/amreleraqi) | 1 | |
|
</details> |
|
|
|
|
|
|
|
### Hindi (hin_Deva) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [catastropiyush](https://huggingface.co/catastropiyush) | 926 | |
|
| [pp](https://huggingface.co/pp) | 73 | |
|
| [Urmish](https://huggingface.co/Urmish) | 1 | |
|
</details> |
|
|
|
|
|
|
|
### Mandarin Chinese (cmn_Hani) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [paperplanedeemo](https://huggingface.co/paperplanedeemo) | 978 | |
|
| [guokan-shang](https://huggingface.co/guokan-shang) | 12 | |
|
| [AdinaY](https://huggingface.co/AdinaY) | 10 | |
|
</details> |
|
|
|
|
|
|
|
### Middle High German (gmh_Latn) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [stefan-it](https://huggingface.co/stefan-it) | 1000 | |
|
</details> |
|
|
|
|
|
|
|
### Moroccan Arabic (ary_Arab) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [Ihssane123](https://huggingface.co/Ihssane123) | 499 | |
|
| [imomayiz](https://huggingface.co/imomayiz) | 234 | |
|
| [NouhailaChab05](https://huggingface.co/NouhailaChab05) | 120 | |
|
| [nouamanetazi](https://huggingface.co/nouamanetazi) | 58 | |
|
| [master12gx](https://huggingface.co/master12gx) | 37 | |
|
| [oumayma03](https://huggingface.co/oumayma03) | 21 | |
|
| [Overowser](https://huggingface.co/Overowser) | 14 | |
|
| [SoufianeDahimi](https://huggingface.co/SoufianeDahimi) | 12 | |
|
| [adnananouzla](https://huggingface.co/adnananouzla) | 11 | |
|
| [alielfilali01](https://huggingface.co/alielfilali01) | 3 | |
|
| [staghado](https://huggingface.co/staghado) | 3 | |
|
| [olafdil](https://huggingface.co/olafdil) | 2 | |
|
| [maghwa](https://huggingface.co/maghwa) | 2 | |
|
| [0xTechVio](https://huggingface.co/0xTechVio) | 1 | |
|
| [maggierphunt](https://huggingface.co/maggierphunt) | 1 | |
|
</details> |
|
|
|
|
|
|
|
### Persian (fas_Arab) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [Maani](https://huggingface.co/Maani) | 985 | |
|
| [mehrdadazizi](https://huggingface.co/mehrdadazizi) | 14 | |
|
| [kargaranamir](https://huggingface.co/kargaranamir) | 1 | |
|
</details> |
|
|
|
|
|
|
|
### Russian (rus_Cyrl) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [kitano-o](https://huggingface.co/kitano-o) | 593 | |
|
| [kristaller486](https://huggingface.co/kristaller486) | 396 | |
|
| [knyazer](https://huggingface.co/knyazer) | 9 | |
|
| [alialek](https://huggingface.co/alialek) | 5 | |
|
</details> |
|
|
|
|
|
|
|
### Standard Arabic (arb_Arab) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [hasnachouikhi](https://huggingface.co/hasnachouikhi) | 1000 | |
|
| [alielfilali01](https://huggingface.co/alielfilali01) | 4 | |
|
</details> |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Standard Latvian (lvs_Latn) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [Aivis](https://huggingface.co/Aivis) | 894 | |
|
| [slckl](https://huggingface.co/slckl) | 48 | |
|
| [finnayeet](https://huggingface.co/finnayeet) | 33 | |
|
| [zemais](https://huggingface.co/zemais) | 26 | |
|
| [minem99](https://huggingface.co/minem99) | 2 | |
|
</details> |
|
|
|
|
|
|
|
### Tatar (tat_Cyrl) |
|
|
|
<details> |
|
<summary>User Statistics Table (Minimum 1 submission)</summary>
|
|
|
| Username | Submissions | |
|
|----------|------------| |
|
| [tagay1n](https://huggingface.co/tagay1n) | 515 | |
|
| [gaydmi](https://huggingface.co/gaydmi) | 313 | |
|
| [inov8](https://huggingface.co/inov8) | 126 | |
|
| [iamdweebish](https://huggingface.co/iamdweebish) | 42 | |
|
| [Giniyatullina](https://huggingface.co/Giniyatullina) | 6 | |
|
| [Empirenull](https://huggingface.co/Empirenull) | 3 | |
|
| [Khusaenov](https://huggingface.co/Khusaenov) | 1 | |
|
</details> |
|
|
|
|
|
|
|
</details> |
|
|
|
## Using this dataset |
|
|
|
The dataset has a `default` config containing the data for all languages, as well as a separate config for each individual language.
|
|
|
To download the dataset using the Hugging Face `datasets` library, you can use the following code: |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("data-is-better-together/fineweb-c-edu") |
|
``` |
|
|
|
To download a specific language, you can use the following code: |
|
|
|
```python |
|
dataset = load_dataset("data-is-better-together/fineweb-c-edu", "cmn_Hani")
|
``` |
|
|
|
You can also download the dataset using pandas:
|
|
|
```python |
|
import pandas as pd |
|
|
|
# Login using e.g. `huggingface-cli login` to access this dataset |
|
df = pd.read_parquet("hf://datasets/data-is-better-together/fineweb-c-edu/arb_Arab/train-00000-of-00001.parquet") |
|
``` |
|
|
|
or Polars:
|
|
|
```python |
|
|
|
import polars as pl |
|
|
|
# Login using e.g. `huggingface-cli login` to access this dataset |
|
df = pl.read_parquet('hf://datasets/data-is-better-together/fineweb-c-edu/arb_Arab/train-00000-of-00001.parquet')
|
``` |
|
|
|
## Data Fields |
|
|
|
The dataset contains the following columns: |
|
|
|
| Column Name | Type | Description | |
|
| ----------------------------------- | ------------ | ---------------------------------------------------------------------------------------------- | |
|
| id | string | A unique identifier for each annotation record | |
|
| text | string | The text of the web page | |
|
| educational_value_labels | list[string] | A list of labels indicating the educational value of the web page rated by the community | |
|
| annotator_ids                       | list[string] | A list of IDs for the annotators who rated the page                                            |

| problematic_content_label_present   | boolean      | A flag indicating the presence of at least one 'problematic' label being assigned to the text  |

| problematic_content_label_agreement | float        | The level of agreement between annotators on the problematic content label                     |

| language_names                      | string       | The name of the language of the page                                                           |

| language_code                       | string       | The code of the language                                                                       |
|
|
|
The main things to note (we'll update this as we get more data):
|
|
|
- Some languages already have multiple annotations per page. So far we haven't done any processing on these rows, so people are free to calculate the agreement of the annotators in whatever way they want.
|
- For languages with many active annotators, we may increase the overlap of annotations over time to further improve the quality of the dataset. |
|
- Some languages contain many `problematic content` labels. These often occur when the language detection was not correct. There is a `problematic_content_label_present` boolean column that indicates whether the page contains at least one `problematic content` label. If you want to remove these rows, you can do so by filtering on this column. Alternatively, you can use the `problematic_content_label_agreement` column to filter on the agreement of the annotators, i.e. only remove rows where the annotators agree on the `problematic content` label. For many of the most active language efforts, we're working with the community to improve the quality of the data, so we hope the number of `problematic content` labels will decrease over time.
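As a sketch of how such agreement calculations and filtering might look, here is one possible approach using pandas on a small, made-up sample that mirrors the dataset's columns (the rows and label values are illustrative, not real data):

```python
from collections import Counter

import pandas as pd

# A small made-up sample mirroring the dataset's columns (not real rows).
df = pd.DataFrame(
    {
        "id": ["a", "b", "c"],
        "educational_value_labels": [
            ["Good", "Good", "Minimal"],
            ["None", "None"],
            ["Excellent"],
        ],
        "problematic_content_label_present": [False, True, False],
        "problematic_content_label_agreement": [0.0, 1.0, 0.0],
    }
)


def majority_label(labels):
    """Return the most common label and the share of annotators who chose it."""
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)


maj, agreement = zip(*df["educational_value_labels"].map(majority_label))
df["majority_label"], df["label_agreement"] = maj, agreement

# Keep only rows where the annotators did NOT unanimously flag the
# content as problematic.
clean = df[df["problematic_content_label_agreement"] < 1.0]
print(clean[["id", "majority_label", "label_agreement"]])
```

Majority vote is only one option; as noted above, you are free to handle multiple annotations per page however suits your use case.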
|
|
|
|
|
## Licensing Information |
|
|
|
The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 license. The use of this dataset is also subject to CommonCrawl's Terms of Use. |
|
|
|
## Citation |
|
|
|
|
|
_Citation information needs to be added_ |
|
|
|
|
|
## Last Updated |
|
|
|
2024-12-20 |