---
dataset_info:
  - config_name: arb_Arab
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 4913929
        num_examples: 1000
    download_size: 2381622
    dataset_size: 4913929
  - config_name: ary_Arab
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 3086740
        num_examples: 1000
    download_size: 1515329
    dataset_size: 3086740
  - config_name: arz_Arab
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 3175887
        num_examples: 1000
    download_size: 1543207
    dataset_size: 3175887
  - config_name: bar_Latn
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 2494628
        num_examples: 1000
    download_size: 1517640
    dataset_size: 2494628
  - config_name: cmn_Hani
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 4075430
        num_examples: 1000
    download_size: 2925797
    dataset_size: 4075430
  - config_name: dan
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 3968961
        num_examples: 1000
    download_size: 2315299
    dataset_size: 3968961
  - config_name: dan_Latn
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 3978961
        num_examples: 1000
    download_size: 2315349
    dataset_size: 3978961
  - config_name: default
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 73884945
        num_examples: 13000
    download_size: 38830555
    dataset_size: 73884945
  - config_name: fas_Arab
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 5759890
        num_examples: 1000
    download_size: 2662440
    dataset_size: 5759890
  - config_name: gmh_Latn
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 16120134
        num_examples: 1000
    download_size: 9109369
    dataset_size: 16120134
  - config_name: hin_Deva
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 6238691
        num_examples: 1000
    download_size: 2358281
    dataset_size: 6238691
  - config_name: lvs
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 4598981
        num_examples: 1000
    download_size: 2807485
    dataset_size: 4598981
  - config_name: lvs_Latn
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 4608981
        num_examples: 1000
    download_size: 2807535
    dataset_size: 4608981
  - config_name: rus_Cyrl
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 9674640
        num_examples: 1000
    download_size: 4687716
    dataset_size: 9674640
  - config_name: tat_Cyrl
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: educational_value_labels
        sequence: string
      - name: annotator_ids
        sequence: string
      - name: problematic_content_label_present
        dtype: bool
      - name: problematic_content_label_agreement
        dtype: float64
      - name: language_names
        dtype: string
      - name: language_code
        dtype: string
    splits:
      - name: train
        num_bytes: 6697853
        num_examples: 1000
    download_size: 3270919
    dataset_size: 6697853
configs:
  - config_name: arb_Arab
    data_files:
      - split: train
        path: arb_Arab/train-*
  - config_name: ary_Arab
    data_files:
      - split: train
        path: ary_Arab/train-*
  - config_name: arz_Arab
    data_files:
      - split: train
        path: arz_Arab/train-*
  - config_name: bar_Latn
    data_files:
      - split: train
        path: bar_Latn/train-*
  - config_name: cmn_Hani
    data_files:
      - split: train
        path: cmn_Hani/train-*
  - config_name: dan
    data_files:
      - split: train
        path: dan/train-*
  - config_name: dan_Latn
    data_files:
      - split: train
        path: dan_Latn/train-*
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
  - config_name: fas_Arab
    data_files:
      - split: train
        path: fas_Arab/train-*
  - config_name: gmh_Latn
    data_files:
      - split: train
        path: gmh_Latn/train-*
  - config_name: hin_Deva
    data_files:
      - split: train
        path: hin_Deva/train-*
  - config_name: lvs
    data_files:
      - split: train
        path: lvs/train-*
  - config_name: lvs_Latn
    data_files:
      - split: train
        path: lvs_Latn/train-*
  - config_name: rus_Cyrl
    data_files:
      - split: train
        path: rus_Cyrl/train-*
  - config_name: tat_Cyrl
    data_files:
      - split: train
        path: tat_Cyrl/train-*
tags:
  - argilla
  - data-is-better-together
task_categories: []
language:
  - lvs
  - fas
  - dan
  - arz
  - ary
  - arb
  - tat
  - rus
  - gmh
  - bar
  - hin
  - cmn
---

# FineWeb-C: Educational content in many languages, labelled by the community

FineWeb 2: A sparkling update with 1000s of languages

Multilingual data is better together!

Note: This dataset and the dataset card are works in progress. You can help contribute to the dataset here and join the community discussions in Rocket Chat!

## What is this?

This is a collaborative, community-driven project that expands upon the FineWeb2 dataset. Our goal is to create high-quality educational content annotations across hundreds of languages.

By enhancing web content with these annotations, we aim to improve the development of Large Language Models (LLMs) in all languages, making AI technology more accessible and effective globally.

The annotations in this dataset will help train AI systems to automatically identify high-quality educational content in more languages and in turn help build better Large Language Models for all languages.

What the community is doing:

- For a given language, look at a page of web content from the FineWeb2 dataset in Argilla.
- Rate how educational the content is.
- Flag problematic content, i.e. content that is malformed or in the wrong language.

Once a language reaches 1,000 annotations, it will be included in this dataset! Alongside rating the educational quality of the content, different language communities are discussing other ways to improve the quality of data for their language in our Rocket Chat discussion channel.

## What's been done so far?

So far 318 members of the Hugging Face community have submitted 32,863 annotations.

The following languages have reached the 1,000 annotation threshold to be included in the dataset. We'll keep updating this dataset as more annotations are added!

| Language Code | Language Name | Completed Annotations | Annotators |
| --- | --- | --- | --- |
| arb_Arab | Standard Arabic | 1000 | 10 |
| ary_Arab | Moroccan Arabic | 1000 | 15 |
| arz_Arab | Egyptian Arabic | 1000 | 9 |
| bar_Latn | Bavarian | 1000 | 1 |
| cmn_Hani | Mandarin Chinese | 1000 | 3 |
| dan | Danish | 1000 | 18 |
| fas_Arab | Persian | 1000 | 3 |
| gmh_Latn | Middle High German | 1000 | 1 |
| hin_Deva | Hindi | 1000 | 3 |
| lvs_Latn | Standard Latvian | 1000 | 5 |
| rus_Cyrl | Russian | 1000 | 4 |
| tat_Cyrl | Tatar | 1000 | 7 |

You can help contribute to the dataset here.

Below is an overview of the number of annotations submitted for each language (updated daily).

## Why are we doing this?

There are many languages in the world for which no high-quality LLMs exist. High-quality data is a central part of building high-quality LLMs. FineWeb2 is a crucial step in improving the availability of such data for many languages. We plan to go a step further.

### FineWeb-Edu for every language?

FineWeb-Edu is a dataset built on the original FineWeb dataset. It was constructed by training an educational quality classifier on annotations generated by Llama-3-70B-Instruct and using this classifier to retain only the most educational web pages.

FineWeb-Edu outperforms FineWeb on popular benchmarks. Crucially, this approach reduces the amount of data needed to train a high-quality LLM, lowering the barrier to building one for many languages.

We want to make it possible to build FineWeb-Edu-style datasets for all the world's languages. To do this, we need annotations with which to train an educational quality classifier.

This in turn will allow us to build the next generation of Large Language Models for many languages.

### Why not use LLMs to annotate the data?

For high-resource languages, using an LLM to generate educational quality annotations can be a good solution. However, for many languages, LLMs cannot generate high-quality annotations, or we don't have enough data to validate whether the annotations are correct.

## How can I help?

You can help by contributing to the dataset here and joining the community discussions in Rocket Chat!

### Why would I bother to contribute to this dataset?

Your contributions directly shape the future of AI in your language. Here's why this matters:

  1. Break the AI language barrier: Most commercial AI companies focus on profitable languages, leaving many communities behind. Your work helps bring AI capabilities to more languages.

  2. Keep it open: Unlike proprietary datasets locked away by companies, FineWeb-C is an open dataset. This means anyone can use it to build AI systems that truly serve their community's needs. Through this open approach, we also learn which approaches work best for different languages.

  3. Be part of something bigger: Just as Wikipedia showed how volunteers can build invaluable resources, the Hugging Face community has created numerous open models and datasets. You're joining a movement to democratize AI technology.

Every annotation counts. Whether you can contribute ten minutes or ten hours, your input helps build a more inclusive future for AI technology 🤗

## Who contributed to this dataset so far?

These are the top 10 contributors to this release of the dataset. Make sure to give them a follow on the Hub to show your appreciation!

| Hugging Face Username | Submissions |
| --- | --- |
| stefan-it | 2,011 |
| hasnachouikhi | 1,865 |
| catastropiyush | 1,053 |
| vikkormallansohn | 1,000 |
| rasgaard | 1,000 |
| Maani | 985 |
| paperplanedeemo | 978 |
| JakobBlaa | 978 |
| anhha9 | 927 |
| Aivis | 894 |

Data work is the underappreciated foundation of AI and ML. This dataset is built by the community, for the community. Below is a leaderboard, updated daily, that shows all the contributors to this annotation effort.

### Language-specific Contributors

Below you can find a list of all the contributors to this release of the dataset for each language ❤️

Detailed Contributor Statistics for each language

#### Bavarian (bar_Latn)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| stefan-it | 1000 |

#### Danish (dan)

User Statistics Table (Minimum 1 submission)

#### Egyptian Arabic (arz_Arab)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| mmhamdy | 734 |
| aishahamdy | 141 |
| oumayma03 | 54 |
| omarelshehy | 46 |
| ghada00 | 14 |
| heba1998 | 10 |
| chemouda | 3 |
| aammari | 2 |
| amreleraqi | 1 |

#### Hindi (hin_Deva)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| catastropiyush | 926 |
| pp | 73 |
| Urmish | 1 |

#### Mandarin Chinese (cmn_Hani)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| paperplanedeemo | 978 |
| guokan-shang | 12 |
| AdinaY | 10 |

#### Middle High German (gmh_Latn)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| stefan-it | 1000 |

#### Moroccan Arabic (ary_Arab)

User Statistics Table (Minimum 1 submission)

#### Persian (fas_Arab)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| Maani | 985 |
| mehrdadazizi | 14 |
| kargaranamir | 1 |

#### Russian (rus_Cyrl)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| kitano-o | 593 |
| kristaller486 | 396 |
| knyazer | 9 |
| alialek | 5 |

#### Standard Arabic (arb_Arab)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| hasnachouikhi | 1000 |
| alielfilali01 | 4 |


#### Standard Latvian (lvs_Latn)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| Aivis | 894 |
| slckl | 48 |
| finnayeet | 33 |
| zemais | 26 |
| minem99 | 2 |

#### Tatar (tat_Cyrl)

User Statistics Table (Minimum 1 submission)

| Username | Submissions |
| --- | --- |
| tagay1n | 515 |
| gaydmi | 313 |
| inov8 | 126 |
| iamdweebish | 42 |
| Giniyatullina | 6 |
| Empirenull | 3 |
| Khusaenov | 1 |

## Using this dataset

The dataset has a default config that contains all the languages, as well as a config for each language.

To download the dataset using the Hugging Face datasets library, you can use the following code:

```python
from datasets import load_dataset

dataset = load_dataset("data-is-better-together/fineweb-c-edu")
```

To download a specific language, you can use the following code:

```python
# The language config is passed as the second positional argument (`name`)
dataset = load_dataset("data-is-better-together/fineweb-c-edu", "cmn_Hani")
```

You can also download the dataset using pandas:

```python
import pandas as pd

# Login using e.g. `huggingface-cli login` to access this dataset
df = pd.read_parquet("hf://datasets/data-is-better-together/fineweb-c-edu/arb_Arab/train-00000-of-00001.parquet")
```

or using Polars:


```python
import polars as pl

# Login using e.g. `huggingface-cli login` to access this dataset
df = pl.read_parquet("hf://datasets/data-is-better-together/fineweb-c-edu/arb_Arab/train-00000-of-00001.parquet")
```

## Data Fields

The dataset contains the following columns:

| Column Name | Type | Description |
| --- | --- | --- |
| `id` | string | A unique identifier for each annotation record |
| `text` | string | The text of the web page |
| `educational_value_labels` | list[string] | The labels indicating the educational value of the web page, as rated by the community |
| `annotator_ids` | list[string] | The IDs of the annotators who labelled the page |
| `problematic_content_label_present` | boolean | A flag indicating that at least one 'problematic' label was assigned to the text |
| `problematic_content_label_agreement` | float | The agreement of the annotators on the problematic content label |
| `language_names` | string | The name of the language of the page |
| `language_code` | string | The code of the language |
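Since `educational_value_labels` holds one label per annotator, a simple per-row majority vote is one way to summarise the ratings. A minimal sketch (the label strings below are hypothetical):

```python
from collections import Counter

def majority_label(labels):
    """Return the most common label in one row and its share of the votes."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

# Hypothetical row annotated by three community members.
label, share = majority_label(["Good", "Good", "None"])
```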

The main things to note (we'll update this as we get more data):

- Some languages already have multiple annotations per page. So far we haven't done any processing on these rows, so people are free to calculate the agreement of the annotators in whatever way they want.
- For languages with many active annotators, we may increase the overlap of annotations over time to further improve the quality of the dataset.
- Some languages contain many problematic content labels. These often occur when the language detection was not correct. The `problematic_content_label_present` boolean column indicates whether the page contains at least one problematic content label. If you want to remove these rows, you can filter on this column. Alternatively, you can use the `problematic_content_label_agreement` column to filter on the agreement of the annotators, i.e. only remove rows where the annotators agree on the problematic content label. For many of the most active language efforts, we're working with the community to improve the quality of the data, so we hope the number of problematic content labels will decrease over time.
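As a sketch of the two filtering strategies described above, here is a small in-memory pandas frame with the relevant columns (the rows are hypothetical; the same logic applies to the real parquet files):

```python
import pandas as pd

# Toy rows mirroring the relevant columns of the dataset (hypothetical values).
df = pd.DataFrame(
    {
        "text": ["page a", "page b", "page c"],
        "problematic_content_label_present": [False, True, True],
        "problematic_content_label_agreement": [0.0, 0.5, 1.0],
    }
)

# Strict: drop every row with at least one problematic-content label.
strict = df[~df["problematic_content_label_present"]]

# Agreement-based: drop only rows where all annotators agree it is problematic.
agreed = df["problematic_content_label_present"] & (
    df["problematic_content_label_agreement"] >= 1.0
)
lenient = df[~agreed]
```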

## Licensing Information

The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 license. The use of this dataset is also subject to CommonCrawl's Terms of Use.

## Citation

Citation information needs to be added

## Last Updated

2024-12-20