Mathlesage committed · verified
Commit 7f4d83a · 1 Parent(s): 90f8428

Update README.md

Files changed (1): README.md (+59 -21)

README.md CHANGED
@@ -1,23 +1,61 @@
 ---
-dataset_info:
-  features:
-  - name: page_id
-    dtype: int64
-  - name: title
-    dtype: string
-  - name: cleaned_text
-    dtype: string
-  - name: linked_titles
-    sequence: string
-  splits:
-  - name: train
-    num_bytes: 33692672890
-    num_examples: 12763591
-  download_size: 19251843861
-  dataset_size: 33692672890
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
+language: fr
+license: cc-by-sa-3.0
+tags:
+- wikipedia
+- text-corpus
+- unsupervised
+pretty_name: French Wikipedia Corpus (April 20, 2025)
+size_categories:
+- 10M<n<100M
---

# French Wikipedia Corpus - Snapshot of April 20, 2025

## Dataset Description

This dataset contains a complete snapshot of the French-language Wikipedia encyclopedia as it existed on April 20, 2025. It includes the latest revision of each page, with its raw text content, the titles of linked pages, and a unique page identifier.

The text of each article retains the MediaWiki formatting for section headings (`== Section Title ==`), subheadings (`=== Subtitle ===`), and so on, which makes it particularly useful for tasks that benefit from the document's hierarchical structure.
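Because the heading markup is preserved, an article's outline can be recovered with a simple regular expression. The sketch below splits an article into `(heading, level, body)` tuples; the helper function and the example string are illustrative only and assume the `cleaned_text` column described further down.

```python
import re

# Matches MediaWiki headings such as "== Histoire ==" or "=== Origines ===".
HEADING_RE = re.compile(r"^(={2,6})\s*(.*?)\s*\1\s*$", re.MULTILINE)

def split_sections(cleaned_text: str):
    """Split an article into (heading, level, body) tuples.

    Text appearing before the first heading is returned under "Introduction".
    Illustrative helper only; it is not part of the dataset or of `datasets`.
    """
    sections = []
    last_title, last_level, last_end = "Introduction", 1, 0
    for match in HEADING_RE.finditer(cleaned_text):
        sections.append((last_title, last_level, cleaned_text[last_end:match.start()].strip()))
        last_title = match.group(2)
        last_level = len(match.group(1))  # "==" -> level 2, "===" -> level 3, ...
        last_end = match.end()
    sections.append((last_title, last_level, cleaned_text[last_end:].strip()))
    return sections

# Illustrative string only; real input comes from the dataset's `cleaned_text` column.
example = "Le Louvre est un musée.\n== Histoire ==\nLe palais...\n=== Origines ===\nAu XIIe siècle..."
for title, level, body in split_sections(example):
    print(level, title, "->", body[:40])
```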
This corpus is well suited to language-model training, information retrieval, question answering, and any other Natural Language Processing (NLP) research that requires a large amount of structured, encyclopedic text.

## Dataset Structure

### Data Fields

The dataset is composed of the following columns:

* **`page_id`** (int64): The unique Wikipedia page identifier of the article.
* **`title`** (string): The title of the Wikipedia article.
* **`cleaned_text`** (string): The full text content of the article. The section structure is preserved with the `==`, `===`, `====`, etc. syntax.
* **`linked_titles`** (list of strings): A list containing the titles of other Wikipedia articles that are linked from the `cleaned_text` field.

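For reference, here is how the columns above map onto the `datasets` typing API. This is a sketch of the expected schema, based on the column names and types listed in this card, not a definition shipped with the dataset:

```python
from datasets import Features, Sequence, Value

# Expected schema, assuming the columns described above.
expected_features = Features({
    "page_id": Value("int64"),                    # unique Wikipedia page identifier
    "title": Value("string"),                     # article title
    "cleaned_text": Value("string"),              # full article text with == section == markup
    "linked_titles": Sequence(Value("string")),   # titles of articles linked from the text
})
```

After loading, this can be compared against `dataset['train'].features`.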
### Data Splits

The dataset contains a single split, `train`, which includes all the articles from the dump (12,763,591 rows, roughly 33.7 GB of text).

## Usage

You can load and use this dataset with the Hugging Face `datasets` library.

```python
from datasets import load_dataset

# Load the dataset (the full download is about 19 GB)
dataset = load_dataset("OrdalieTech/wiki_fr")

# Display information about the dataset
print(dataset)
# >>> DatasetDict({
# >>>     train: Dataset({
# >>>         features: ['page_id', 'title', 'cleaned_text', 'linked_titles'],
# >>>         num_rows: 12763591
# >>>     })
# >>> })

# Access an example
first_article = dataset['train'][0]
print("Title:", first_article['title'])
print("\nText excerpt:", first_article['cleaned_text'][:500])
print("\nLinked titles:", first_article['linked_titles'][:5])
```
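
Given the size of the corpus, you may prefer not to download everything just to inspect a few records. Below is a minimal sketch using the library's standard streaming mode; only the column names documented above are assumed.

```python
from itertools import islice

from datasets import load_dataset

# Stream records instead of downloading the full ~19 GB up front.
streamed = load_dataset("OrdalieTech/wiki_fr", split="train", streaming=True)

# Inspect the first few articles.
for article in islice(streamed, 3):
    print(article["title"], "->", len(article["linked_titles"]), "outgoing links")
```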