---
language: fr
license: cc-by-sa-3.0
tags:
  - wikipedia
  - text-corpus
  - unsupervised
pretty_name: French Wikipedia Corpus (April 20, 2025)
size_categories:
  - +1B
---

# French Wikipedia Corpus - Snapshot of April 20, 2025

## Dataset Description

This dataset contains a complete snapshot of the French-language Wikipedia as it existed on April 20, 2025. It includes the latest revision of each page, with its raw text content, the titles of linked pages, and a unique identifier.

The text of each article retains the MediaWiki heading markup for section titles (`== Section Title ==`), subsections (`=== Subtitle ===`), and so on. This makes the corpus particularly useful for tasks that can benefit from the documents' hierarchical structure.
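Because the heading markup is preserved, an article's `text` field can be split back into sections with a simple regular expression. The sketch below is illustrative only; `split_sections` is a hypothetical helper (not part of the dataset) and assumes headings sit on their own lines, as in standard MediaWiki markup.

```python
import re

# Matches MediaWiki headings such as "== Histoire ==" or "=== Origines ===".
HEADING_RE = re.compile(r"^(={2,6})\s*(.+?)\s*\1\s*$", re.MULTILINE)

def split_sections(text):
    """Return a list of (level, title, body) tuples for one article."""
    matches = list(HEADING_RE.finditer(text))
    sections = []
    # Everything before the first heading is treated as the introduction.
    intro_end = matches[0].start() if matches else len(text)
    sections.append((1, "Introduction", text[:intro_end].strip()))
    for i, m in enumerate(matches):
        level = len(m.group(1))  # '==' -> 2, '===' -> 3, ...
        title = m.group(2)
        body_start = m.end()
        body_end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections.append((level, title, text[body_start:body_end].strip()))
    return sections
```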

This corpus is well suited for training language models, information retrieval, question answering, and any other Natural Language Processing (NLP) research that requires a large amount of structured, encyclopedic text.

## Dataset Structure

### Data Fields

The dataset is composed of the following columns (an illustrative record is sketched after this list):

- `id` (string): A unique identifier for each article (e.g., the Wikipedia page ID).
- `title` (string): The title of the Wikipedia article.
- `text` (string): The full text content of the article. The section structure is preserved with the `==`, `===`, `====`, etc. syntax.
- `linked_titles` (list of strings): A list containing the titles of other Wikipedia articles that are linked from the `text` field.
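
For orientation, here is a hedged sketch of what a single record might look like; the `id`, `title`, `text`, and `linked_titles` values below are invented for illustration and do not come from the actual dump.

```python
# Illustrative record only; all values are made up for this example.
example_record = {
    "id": "12345",
    "title": "Tour Eiffel",
    "text": "La tour Eiffel est une tour de fer puddlé...\n\n== Histoire ==\n...",
    "linked_titles": ["Gustave Eiffel", "Paris", "Exposition universelle de 1889"],
}
```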

### Data Splits

The dataset contains only one split: train, which includes all the articles from the dump.

## Usage

You can easily load and use this dataset with the Hugging Face `datasets` library.

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("OrdalieTech/wiki_fr")

# Display information about the dataset
print(dataset)
# >>> DatasetDict({
# >>>     train: Dataset({
# >>>         features: ['id', 'title', 'text', 'linked_titles'],
# >>>         num_rows: 2700000  # Example
# >>>     })
# >>> })

# Access an example
first_article = dataset['train'][0]
print("Title:", first_article['title'])
print("\nText excerpt:", first_article['text'][:500])
print("\nLinked titles:", first_article['linked_titles'][:5])
```