damerajee committed · verified
Commit 3144f8d · 1 Parent(s): 26206b1

Update README.md

Files changed (1)
  1. README.md +33 -34
README.md CHANGED
@@ -1,33 +1,33 @@
- ---
- dataset_info:
-   features:
-   - name: doc_id
-     dtype: string
-   - name: type
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 25324509618
-     num_examples: 806930
-   download_size: 9419131940
-   dataset_size: 25324509618
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- license: cc-by-4.0
- task_categories:
- - text-generation
- language:
- - hi
- - en
- pretty_name: 'long-context '
- size_categories:
- - 100K<n<1M
- ---

  # Dataset
 
@@ -42,17 +42,16 @@ This dataset contains only Hindi as of now
  # Getting started

  For downloading the entire dataset:
- ```
  from datasets import load_dataset
-
  dataset = load_dataset("damerajee/long_context_hindi")
  ```
  If the dataset is too big, you can simply stream it:
- ```
  from datasets import load_dataset

  dataset = load_dataset("damerajee/long_context_hindi",split='train',streaming=True)
  ```
- ```
  dataset.take(2)
  ```
 
+ ---
+ dataset_info:
+   features:
+   - name: doc_id
+     dtype: string
+   - name: type
+     dtype: string
+   - name: text
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 25324509618
+     num_examples: 806930
+   download_size: 9419131940
+   dataset_size: 25324509618
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+ license: cc-by-4.0
+ task_categories:
+ - text-generation
+ language:
+ - hi
+ - en
+ pretty_name: 'long-context '
+ size_categories:
+ - 100K<n<1M
+ ---

  # Dataset
 
  # Getting started

  For downloading the entire dataset:
+ ```python
  from datasets import load_dataset
  dataset = load_dataset("damerajee/long_context_hindi")
  ```
  If the dataset is too big, you can simply stream it:
+ ```python
  from datasets import load_dataset

  dataset = load_dataset("damerajee/long_context_hindi",split='train',streaming=True)
  ```
+ ```python
  dataset.take(2)
  ```
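
For reference, here is a minimal sketch (not part of the commit itself) of how the streamed split could be consumed; it assumes the `doc_id`, `type`, and `text` fields declared in the `dataset_info` block above.

```python
from datasets import load_dataset

# Stream the train split instead of downloading the full ~25 GB locally
dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)

# take(2) limits the streamed split to its first two examples
for example in dataset.take(2):
    # Field names assumed from the dataset_info front matter: doc_id, type, text
    print(example["doc_id"], example["type"], len(example["text"]))
```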