  data_files:
  - split: train
    path: all/train-*
task_categories:
- text-generation
language:
- nl
pretty_name: Common Corpus v2 NL
size_categories:
- 1M<n<10M
---
# Common Corpus v2 NL

This is a version of [Common Corpus v2](https://huggingface.co/datasets/PleIAs/common_corpus/) filtered to keep only the rows where `language` is `"Dutch"`.

Common Corpus is a very large, openly and permissively licensed text dataset created by [Pleias](https://pleias.fr).
Please be sure to acknowledge the creators of the [original dataset](https://huggingface.co/datasets/PleIAs/common_corpus) when using this filtered version.

## Filtering
Common Corpus is a collection of disparate datasets.
Note that filtering the entire collection for rows where `language` is `"Dutch"` is not the same as selecting entire datasets that are meant to be Dutch-language.
Since language classification is an automated process, there may be false positives and false negatives.

Examples are:
- **false positives**: code from `StackExchange` and French from `French-PD-diverse` are included in Common Corpus NL when they shouldn't be.
- **false negatives**: rows from `Dutch-PD` that were misclassified, for example as English, are not included in Common Corpus NL when they should be.

If you want to use entire source datasets, you either have to look up the original source from which they were included in Common Corpus, or filter Common Corpus yourself, as in the sketch below.
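
For example, a minimal sketch of selecting one entire source collection from the original Common Corpus, regardless of detected language (assuming its `collection` column carries the names listed in the Contents table below):

```python
from datasets import load_dataset

# Stream the original Common Corpus so it does not have to fit on disk,
# and keep every row of one source collection (hypothetical example: 'Dutch-PD'),
# regardless of the automatically detected `language`.
common_corpus = load_dataset('PleIAs/common_corpus', split='train', streaming=True)
dutch_pd = common_corpus.filter(lambda row: row['collection'] == 'Dutch-PD')

for row in dutch_pd:
    print(row['language'])  # may occasionally be e.g. 'English' for misclassified rows
    break
```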

## Usage
```python
from datasets import load_dataset

# load the full dataset
dataset = load_dataset('Rijgersberg/common_corpus_nl', 'all')

# load only a specific subset
wikipedia = load_dataset('Rijgersberg/common_corpus_nl', 'Open Web-Wikipedia')
```
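
If the full download is not needed, the same `load_dataset` call can also stream the data; a small sketch:

```python
from datasets import load_dataset

# iterate over the dataset without downloading it entirely first
dataset = load_dataset('Rijgersberg/common_corpus_nl', 'all', split='train', streaming=True)

for row in dataset:
    print(row['text'][:100])
    break
```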

## Contents
The dataset has the following contents.
Token counts are measured with the [robbert-2023-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-base) Dutch tokenizer on the `text` column only.
`word_count` is taken directly from the `word_count` column of the dataset.

| collection | open_type | row_count | word_count | token_count |
|:---------------------------|:----------------|------------:|-------------:|--------------:|
| None | None | 2 | 729 | 1,685 |
| Dutch-PD | Open Culture | 198,090 | 1,341,547,229 | 2,453,085,804 |
| French-PD-diverse | Open Culture | 5,357 | 38,359,009 | 75,965,219 |
| Gutenberg | Open Culture | 4,844 | 44,405,784 | 74,446,253 |
| US-PD-Books | Open Culture | 3,714 | 23,298,108 | 53,772,090 |
| Multilingual-PD | Open Culture | 3,164 | 22,935,496 | 45,033,262 |
| English-PD | Open Culture | 2,622 | 18,344,861 | 38,407,512 |
| German-PD | Open Culture | 1,744 | 12,434,468 | 26,319,336 |
| US-PD-Newspapers | Open Culture | 1,478 | 5,297,259 | 11,335,348 |
| Latin-PD | Open Culture | 637 | 4,406,870 | 9,623,548 |
| LoC-PD-Books | Open Culture | 480 | 3,384,886 | 6,359,398 |
| Italian-PD | Open Culture | 253 | 1,767,994 | 4,185,808 |
| French-PD-Books | Open Culture | 195 | 1,462,312 | 3,326,121 |
| Europeana | Open Culture | 126 | 751,961 | 2,018,302 |
| Spanish-PD-Books | Open Culture | 114 | 819,389 | 1,831,298 |
| French-PD-Newspapers | Open Culture | 117 | 589,899 | 1,352,344 |
| Danish-PD | Open Culture | 36 | 260,479 | 578,291 |
| Spanish-PD-Newspapers | Open Culture | 34 | 221,839 | 533,701 |
| German-PD-Newspapers | Open Culture | 27 | 155,979 | 398,229 |
| NewZealand-PD-Newspapers | Open Culture | 70 | 135,552 | 348,682 |
| Polish-PD | Open Culture | 6 | 37,573 | 135,721 |
| Portuguese-PD | Open Culture | 5 | 35,666 | 84,105 |
| Greek-PD | Open Culture | 1 | 11,084 | 23,343 |
| BNL Newspapers (1841-1879) | Open Culture | 42 | 6,409 | 17,658 |
| Wikisource | Open Culture | 1 | 36 | 58 |
| Eurlex | Open Government | 269,340 | 948,031,137 | 2,104,235,522 |
| Eurovoc | Open Government | 46,006 | 480,170,115 | 971,763,805 |
| French Open Data | Open Government | 228,097 | 211,210,103 | 546,913,250 |
| Marianne-Europe | Open Government | 10,101 | 44,308,181 | 113,369,293 |
| TEDEUTenders | Open Government | 5,105 | 3,423,351 | 8,123,266 |
| USPTO | Open Government | 40 | 410,010 | 1,505,353 |
| UN-Digital-Library | Open Government | 19 | 95,691 | 436,028 |
| WTO | Open Government | 11 | 55,733 | 125,785 |
| Court Listener | Open Government | 243 | 22,176 | 82,334 |
| OECD | Open Government | 5 | 30,895 | 56,160 |
| Caselaw Access Project | Open Government | 450 | 13,344 | 33,405 |
| GATT_library | Open Government | 1 | 282 | 1,112 |
| OpenAlex | Open Science | 11,867 | 76,200,223 | 142,457,454 |
| French-Science-Pile | Open Science | 1,485 | 6,197,572 | 18,685,715 |
| Open-Science-Pile | Open Science | 1,199 | 4,711,962 | 8,011,769 |
| German-Science-Pile | Open Science | 985 | 4,234,708 | 7,488,555 |
| Spanish-Science-Pile | Open Science | 163 | 1,071,826 | 2,263,934 |
| Wikipedia | Open Web | 2,135,977 | 367,689,443 | 634,440,794 |
| StackExchange | Open Web | 270,147 | 117,494,333 | 464,336,069 |
| Youtube-Commons | Open Web | 1,982 | 5,886,772 | 8,329,426 |
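
The token counts above could, in principle, be reproduced along the lines of the sketch below (a slow but simple single pass; the `collection` and `open_type` columns are assumed to be present, as in the table):

```python
from collections import Counter

from datasets import load_dataset
from transformers import AutoTokenizer

# count robbert-2023-dutch-base tokens over the `text` column
# and aggregate per (collection, open_type) group
tokenizer = AutoTokenizer.from_pretrained('DTAI-KULeuven/robbert-2023-dutch-base')
dataset = load_dataset('Rijgersberg/common_corpus_nl', 'all', split='train')

rows, words, tokens = Counter(), Counter(), Counter()
for row in dataset:
    key = (row['collection'], row['open_type'])
    rows[key] += 1
    words[key] += row['word_count']
    tokens[key] += len(tokenizer(row['text'])['input_ids'])

for key, token_count in tokens.most_common():
    print(key, rows[key], words[key], token_count)
```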

## Code

In principle it is very easy to create Common Corpus NL by filtering Common Corpus with the Hugging Face `datasets` library's `dataset.filter()` functionality.
However, Common Corpus is larger than my available disk space.

A possible solution is to process Common Corpus in streaming mode, relying on the fact that the Dutch subset is much, much smaller than the full dataset.
The code for that solution is below.
However, I had trouble streaming the entire dataset without running into connection errors along the way.
```python
from datasets import load_dataset, Dataset

# stream the full corpus so it never has to fit on local disk
common_corpus = load_dataset('PleIAs/common_corpus', split='train', streaming=True)

# generator that yields only the rows classified as Dutch
def nl():
    for row in common_corpus:
        if row['language'] == 'Dutch':
            yield row

# materialize the (much smaller) Dutch subset as a regular Dataset
common_corpus_nl = Dataset.from_generator(nl)

common_corpus_nl.push_to_hub('Rijgersberg/common_corpus_nl')
```

Therefore, for each of the ten subfolders of Common Corpus, I took the following approach:

- download the subfolder in a fault-tolerant way
- filter it down to Dutch rows only
- upload that subset on its own to the Hugging Face Hub
- delete all the downloaded files and `datasets` cache files (around 1.5 TB for every subfolder)

Finally, I concatenated the ten Dutch datasets into a single one, which is the one you are looking at.

```python
import shutil

from datasets import concatenate_datasets, load_dataset
from huggingface_hub import snapshot_download
from huggingface_hub.errors import LocalEntryNotFoundError
from requests import ReadTimeout


local_dir = '/path/to/downloadfolder/commoncorpus'

for i in range(1, 10+1):
    success = False
    while not success:
        try:
            # download one Common Corpus subfolder at a time to a local directory
            snapshot_download(  # will skip files that have already been downloaded
                repo_id='PleIAs/common_corpus',
                repo_type='dataset',
                allow_patterns=f'common_corpus_{i}/*',
                local_dir=local_dir,
            )
            success = True
        except (LocalEntryNotFoundError, ReadTimeout) as e:
            print(e)

    # keep only the rows classified as Dutch
    subdataset = load_dataset(local_dir, split='train')
    subdataset = subdataset.filter(lambda lang: lang == 'Dutch', input_columns=['language'])

    subdataset.push_to_hub(f'Rijgersberg/common_corpus_nl_{i}')

    # remove local copies of the data to free up disk space
    shutil.rmtree(local_dir)
    shutil.rmtree('/path/to/cache/huggingface/datasets/commoncorpus')

# concatenate all (much smaller) Dutch datasets into a single dataset
common_corpus_nl = concatenate_datasets([load_dataset(f'Rijgersberg/common_corpus_nl_{i}', split='train')
                                         for i in range(1, 10+1)])
common_corpus_nl.push_to_hub('Rijgersberg/common_corpus_nl')
```
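
The subset configs referenced under Usage (such as `'Open Web-Wikipedia'`) are not created by the snippet above. As a hedged sketch, one way to add them would be to push each group as its own config via `config_name`, assuming the subsets follow an `open_type`-`collection` naming scheme:

```python
from datasets import load_dataset

# hypothetical sketch: push every (open_type, collection) group as its own config
common_corpus_nl = load_dataset('Rijgersberg/common_corpus_nl', 'all', split='train')

groups = set(zip(common_corpus_nl['open_type'], common_corpus_nl['collection']))
for open_type, collection in groups:
    if open_type is None or collection is None:
        continue  # skip the handful of rows without collection metadata
    subset = common_corpus_nl.filter(
        lambda row: row['open_type'] == open_type and row['collection'] == collection)
    subset.push_to_hub('Rijgersberg/common_corpus_nl', config_name=f'{open_type}-{collection}')
```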