---
dataset_info:
  features:
    - name: doc_id
      dtype: string
    - name: type
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 25324509618
      num_examples: 806930
  download_size: 9419131940
  dataset_size: 25324509618
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - hi
  - en
pretty_name: long-context
size_categories:
  - 100K<n<1M
---

# Dataset

This dataset was filtered from the AI4Bharat Sangraha dataset, the largest high-quality, cleaned Indic-language pretraining corpus, containing 251B tokens summed over 22 languages and extracted from curated sources, existing multilingual corpora, and large-scale translations.

This dataset contains only Hindi for now.

## Information

- This dataset is mainly intended for long-context training
- The minimum length is and the maximum length is (one way to measure these yourself is sketched below)
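
The length bounds above are not filled in on the card, so the snippet below is a rough, illustrative way to estimate them: it streams a small sample of documents and reports the minimum and maximum character lengths seen. The sample size and the use of character counts (rather than tokens) are assumptions for illustration, not properties stated by the dataset.

```python
from itertools import islice

from datasets import load_dataset

# Stream the train split so nothing is downloaded up front
dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)

# Character lengths over a small sample (the sample size is an arbitrary choice)
lengths = [len(example["text"]) for example in islice(dataset, 2000)]
print(f"sampled {len(lengths)} docs: min={min(lengths)}, max={max(lengths)}")
```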

## Getting started

To download the entire dataset:

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi")
```
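
Each record exposes the `doc_id`, `type`, and `text` fields listed in the metadata above; a quick sanity check on the first training example might look like this:

```python
# Inspect the first example and its fields (field names taken from the metadata above)
example = dataset["train"][0]
print(example["doc_id"], example["type"], len(example["text"]))
```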

If the dataset is too large to download, you can stream it instead:

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)
# take(2) is lazy and returns a new IterableDataset; materialize it to see the examples
print(list(dataset.take(2)))
```
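
Since the card describes the data as intended for long-context training, one possible pattern is to keep only documents above some length threshold while streaming. The 50,000-character cutoff below is an arbitrary illustration, not a property of the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)

# Keep only sufficiently long documents; the threshold is an arbitrary example value
long_docs = dataset.filter(lambda example: len(example["text"]) > 50_000)

for example in long_docs.take(3):
    print(example["doc_id"], len(example["text"]))
```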