Commit 63ee51f by kargaranamir (1 parent: 33cc056)

add instruction.

Files changed (1):
  1. README.md +30 -14
README.md CHANGED
@@ -6446,38 +6446,54 @@ The current version is filtered based on adopted filters from C4, CCNet, MADLAD-
  We release our pipeline as open-source at [https://github.com/cisnlp/GlotCC](https://github.com/cisnlp/GlotCC).


- ### Usage (Hugging Face Dataloader)

- Replace `eng-Latn` with your specific language.

  ```python
- !pip install datasets
  ```

  ```python
- from datasets import load_dataset

- dataset = load_dataset('cis-lmu/GlotCC-V1', 'eng-Latn')
- print(dataset['train'][0]) # First row of eng-Latn
  ```

- ### Download

- If you prefer not to use the Hugging Face dataloader or are interested in a specific language, you can download it directly. For example, to download the first file of `eng-Latn`:

  ```python
- !wget https://huggingface.co/datasets/cis-lmu/GlotCC-V1/resolve/main/v1.0/eng-Latn/eng-Latn_0.parquet
  ```

- Then you can load it with any library that supports Parquet files, such as Pandas:

- ```python
- import pandas as pd

- dataset = pd.read_parquet('/path/to/eng-Latn_0.parquet')
- print(dataset.head())
  ```

  ## Additional Information

  The dataset is currently under heavy audit and changes accordingly.
 
  We release our pipeline as open-source at [https://github.com/cisnlp/GlotCC](https://github.com/cisnlp/GlotCC).


+ ### Usage (Hugging Face Hub - Recommended)
+
+ Replace `bal-Arab` with your specific language.


  ```python
+ from huggingface_hub import snapshot_download
+ folder = snapshot_download(
+     "cis-lmu/glotcc-v1",
+     repo_type="dataset",
+     local_dir="./path/to/glotcc-v1/",
+     # replace "v1.0/bal-Arab/*" with any other language path
+     allow_patterns="v1.0/bal-Arab/*")
  ```

+
+ For faster downloads, make sure to `pip install huggingface_hub[hf_transfer]` and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
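A minimal sketch of combining both steps, assuming `hf_transfer` is already installed; the environment variable is set before `huggingface_hub` is imported so the faster backend is picked up:

```python
import os

# Enable the hf_transfer download backend; assumes
# `pip install huggingface_hub[hf_transfer]` has already been run.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

folder = snapshot_download(
    "cis-lmu/glotcc-v1",
    repo_type="dataset",
    local_dir="./path/to/glotcc-v1/",
    allow_patterns="v1.0/bal-Arab/*",  # any other language path works here
)
```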
+
+ Then you can load it with any library that supports Parquet files, such as Pandas:
+
  ```python
+ import pandas as pd

+ dataset = pd.read_parquet('./path/to/glotcc-v1/v1.0/bal-Arab/bal-Arab_0.parquet')
+ print(dataset.head())
  ```
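For languages whose subset is split across several Parquet shards, a small sketch (the directory layout assumed here mirrors the `local_dir` used above) for loading and concatenating all of them:

```python
import glob

import pandas as pd

# Read every downloaded shard of the bal-Arab subset and stack them into a
# single DataFrame; the path mirrors the local_dir used in the example above.
files = sorted(glob.glob('./path/to/glotcc-v1/v1.0/bal-Arab/*.parquet'))
dataset = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
print(dataset.shape)
```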

+ ### Usage (Hugging Face datasets)


  ```python
+ from datasets import load_dataset
+ # replace name="bal-Arab" with any other language
+ fw = load_dataset("cis-lmu/glotcc-v1", name="bal-Arab", split="train", streaming=True)
+
+ print(next(iter(fw)))  # print the first example from the streaming dataset
  ```
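As a follow-up sketch, the streaming split can also be sampled a few rows at a time with `itertools.islice` instead of calling `next` once (same example language as above):

```python
from itertools import islice

from datasets import load_dataset

# Stream the bal-Arab training split and print the first three rows
# without downloading the whole subset.
fw = load_dataset("cis-lmu/glotcc-v1", name="bal-Arab", split="train", streaming=True)
for row in islice(fw, 3):
    print(row)
```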


+ ### Usage (direct download)

+ If you prefer not to use the Hugging Face datasets library or the Hub, you can download the files directly. For example, to download the first file of `bal-Arab`:
+
+ ```python
+ !wget https://huggingface.co/datasets/cis-lmu/GlotCC-V1/resolve/main/v1.0/bal-Arab/bal-Arab_0.parquet
  ```
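Where the notebook-style `!wget` is inconvenient, a sketch of the same single-file download done programmatically with `huggingface_hub` (an alternative suggested here, not part of the README):

```python
from huggingface_hub import hf_hub_download

# Download one Parquet shard of the bal-Arab subset and return its local path.
path = hf_hub_download(
    repo_id="cis-lmu/glotcc-v1",
    repo_type="dataset",
    filename="v1.0/bal-Arab/bal-Arab_0.parquet",
)
print(path)
```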

+
  ## Additional Information

  The dataset is currently under heavy audit and changes accordingly.