damerajee committed
Commit 26206b1 · verified · 1 Parent(s): f3c2763

Update README.md

Files changed (1): README.md (+58 -21)
---
dataset_info:
  features:
  - name: doc_id
    dtype: string
  - name: type
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 25324509618
    num_examples: 806930
  download_size: 9419131940
  dataset_size: 25324509618
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
- en
pretty_name: long-context
size_categories:
- 100K<n<1M
---

# Dataset

This dataset was filtered from the AI4Bharat dataset [Sangraha](https://huggingface.co/datasets/ai4bharat/sangraha), the largest high-quality, cleaned Indic-language pretraining corpus, containing 251B tokens summed over 22 languages and extracted from curated sources, existing multilingual corpora, and large-scale translations.

This dataset contains only Hindi for now.

# Information

* This dataset is mainly intended for long-context training.
* The minimum length is and the maximum length is
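
The long-context filtering described above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the 200-token threshold and whitespace tokenization are assumptions made for the example, and the card does not state the real filtering criteria.

```python
# Sketch of length-based filtering for long-context data.
# The threshold and whitespace tokenization are illustrative assumptions;
# the dataset card does not state the actual criteria used.

def is_long_context(example, min_tokens=200):
    """Keep only documents with at least `min_tokens` whitespace tokens."""
    return len(example["text"].split()) >= min_tokens

docs = [
    {"doc_id": "a", "type": "web", "text": "short text"},
    {"doc_id": "b", "type": "web", "text": "word " * 500},
]
long_docs = [d for d in docs if is_long_context(d)]
print([d["doc_id"] for d in long_docs])  # -> ['b']
```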

# Getting started

To download the entire dataset:

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi")
```

If the dataset is too big, you can stream it instead:

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)
```

```python
# In streaming mode, take(2) returns an iterable; materialize it to inspect it.
list(dataset.take(2))
```
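
Each example is a dict with `doc_id`, `type`, and `text` fields (per the `dataset_info` above). The sketch below shows one way to split a long document into fixed-size chunks for training; the 4096-character window and the toy example are illustrative assumptions, not recommended settings.

```python
# Split a document's "text" field into fixed-size character chunks.
# The 4096-character window is an arbitrary illustrative choice.

def chunk_text(text, window=4096):
    """Yield consecutive character windows from a document."""
    for start in range(0, len(text), window):
        yield text[start:start + window]

# Hypothetical example mimicking the dataset's schema (doc_id, type, text).
example = {"doc_id": "demo", "type": "web", "text": "x" * 10000}
chunks = list(chunk_text(example["text"]))
print(len(chunks))  # -> 3 (4096 + 4096 + 1808 characters)
```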