sadrasabouri committed
Commit f2674a9
1 Parent(s): 6b8da7c

add: hard-disk/internet shortage section added.

Files changed (1): README.md (+36, −2)
README.md CHANGED
@@ -56,8 +56,6 @@ from datasets import load_dataset
 
 dataset = load_dataset("SLPL/naab")
 ```
- _Note: be sure that your machine has at least 130 GB free space, also it may take a while to download._
-
 You may need to download parts/splits of this corpus too; if so, use the command below (you can find more ways to use it [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
 ```python
 from datasets import load_dataset
@@ -65,6 +63,42 @@ from datasets import load_dataset
 dataset = load_dataset("SLPL/naab", split="train[:10%]")
 ```
 
+ **Note: make sure your machine has at least 130 GB of free space, and keep in mind that the download may take a while. If you are short on disk space or bandwidth, you can use the code snippet below to download only custom sections of naab:**
+
+ ```python
+ from datasets import load_dataset
+
+ # ==========================================================
+ # You should only change this part to select which parts of
+ # the corpus you want to download.
+ indices = {
+     "train": [5, 1, 2],
+     "test": [0, 2]
+ }
+ # ==========================================================
+
+ # Total number of shard files in each split of the corpus.
+ N_FILES = {
+     "train": 126,
+     "test": 3
+ }
+ _BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/"
+ data_url = {
+     "train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])],
+     "test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])],
+ }
+ # Make sure every requested index actually exists.
+ for index in indices["train"]:
+     assert index < N_FILES["train"]
+ for index in indices["test"]:
+     assert index < N_FILES["test"]
+ data_files = {
+     "train": [data_url["train"][i] for i in indices["train"]],
+     "test": [data_url["test"][i] for i in indices["test"]]
+ }
+ print(data_files)
+ dataset = load_dataset("text", data_files=data_files, use_auth_token=True)
+ ```
+
 ### Supported Tasks and Leaderboards
 
 This corpus can be used to train any language model with Masked Language Modeling (MLM) or any other self-supervised objective.
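
As a side note to this change: beyond hand-picking shard files as the new section does, 🤗 Datasets offers two other ways to work within disk or bandwidth limits. Streaming mode (`streaming=True`) fetches samples on the fly without materializing the corpus locally, and the slice-split syntax accepts absolute indices as well as percentages. The sketch below assumes the naab loading script supports streaming and exposes a `text` field; both are assumptions, not something this commit confirms.

```python
from datasets import load_dataset

# Stream the corpus: samples are fetched on the fly, so the
# full ~130 GB never has to land on local disk (assumes the
# naab loading script is streamable).
streamed = load_dataset("SLPL/naab", split="train", streaming=True)
for i, sample in enumerate(streamed):
    print(sample["text"][:80])  # "text" field name is an assumption
    if i == 2:
        break

# Slice splits also take absolute indices, not only percentages:
head = load_dataset("SLPL/naab", split="train[:1000]")
window = load_dataset("SLPL/naab", split="train[10%:20%]")
```

Compared with the `data_files` snippet in this commit, streaming trades random access for zero disk footprint, so it suits one-pass pretraining loops better than repeated epochs over a fixed subset.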