---
license: cc-by-sa-3.0
---

# French Wikipedia Dataset

## Overview

This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the [official French Wikipedia site](https://fr.wikipedia.org/) on September 24, 2023. It aims to address issues with existing Wikipedia datasets, such as poorly parsed text that is missing essential information like dates and locations.

## Format

- **Type**: Text
- **File Extension**: `.txt`

## Structure

The dataset is divided into the following splits:

- `train.txt`: 90%
- `test.txt`: 5%
- `valid.txt`: 5%

Each article in the dataset exceeds 1,200 characters in length.

## Data Cleaning and Preprocessing

The following elements have been excluded from the dataset:

- H1 to H4 headings
- Lists
- Tables
- Sources and references

The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered with the `langid` library to keep only French text (a minimal sketch of such a filter is shown at the end of this card). Some quotations or short terms in other languages, including non-Latin scripts, may still be present.

## Exploring the Dataset

You can use the `explore_dataset.py` script to explore the dataset by displaying a given number of randomly chosen lines. The script builds and saves an index based on line breaks, enabling faster data retrieval and display (a sketch of this indexing approach also appears at the end of this card).

## Additional Information

This dataset is a subset of a larger 10 GB French dataset, which also contains several thousand French-language books and theses, as well as several hundred thousand Francophone news articles.
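
For illustration, here is a minimal sketch of the kind of language filter described under "Data Cleaning and Preprocessing". It is not the actual preprocessing code; the `keep_french` helper and the 1,200-character floor applied here are assumptions based on the description above.

```python
# pip install langid
import langid


def keep_french(lines, min_chars=1200):
    """Yield only lines that langid classifies as French and that meet the length floor.

    Hypothetical helper illustrating the filtering step described in this card;
    the real pipeline may differ.
    """
    for line in lines:
        text = line.strip()
        if len(text) < min_chars:
            continue
        lang, _score = langid.classify(text)
        if lang == "fr":
            yield text
```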
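
The following is a minimal sketch of the line-index approach described under "Exploring the Dataset", assuming the split files sit in the current directory. It is not a copy of `explore_dataset.py`; the function names and the pickle-based cache are illustrative assumptions.

```python
import os
import pickle
import random


def build_line_index(path, index_path=None):
    """Build (and cache on disk) a list of byte offsets, one per line of the file."""
    index_path = index_path or path + ".index.pkl"
    if os.path.exists(index_path):
        with open(index_path, "rb") as f:
            return pickle.load(f)
    offsets = []
    with open(path, "rb") as f:
        offset = 0
        for line in f:
            offsets.append(offset)
            offset += len(line)
    with open(index_path, "wb") as f:
        pickle.dump(offsets, f)
    return offsets


def sample_lines(path, n=5, seed=None):
    """Print n randomly chosen lines, seeking directly via the cached offsets."""
    offsets = build_line_index(path)
    rng = random.Random(seed)
    with open(path, "rb") as f:
        for offset in rng.sample(offsets, n):
            f.seek(offset)
            print(f.readline().decode("utf-8").rstrip())


if __name__ == "__main__":
    sample_lines("train.txt", n=5)
```

Building the index once and seeking by byte offset avoids re-reading the whole file on every call, which matters for splits of this size.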