---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 3977615851
    num_examples: 2293647
  download_size: 1879839994
  dataset_size: 3977615851
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset is an Arabic sample extracted from the [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
Arabic subset (`arb_Arab`), which is supposed to be Modern Standard Arabic.
The sample contains around 2.3 million rows. First, the whole subset (57.8M rows) was scanned and rows
were kept only if over 95% of their words were Arabic; this 2.3M sample was then drawn at random from the _mostly Arabic_
data (a sketch of the filtering idea is shown below). Note that `language_score` is not an accurate measure. Also, the filtering did not exclude slang, dialects, or
inappropriate content (no row was edited and all columns were kept).

The main purpose of this dataset is educational, and I hope it helps researchers in designing and developing pre-processing
for the main FineWeb2 dataset (or any other Arabic corpora).
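The exact filtering code is not part of this card; the following is a minimal sketch of the 95%-Arabic-words check, assuming a crude Unicode-range heuristic for what counts as an Arabic word (the filter actually used may define this differently):

```python
import re

# Arabic Unicode block (U+0600 to U+06FF). This is an assumption:
# the real filter may use a stricter definition of "Arabic word".
ARABIC_CHARS = re.compile(r"[\u0600-\u06FF]")

def arabic_word_ratio(text: str) -> float:
    """Fraction of whitespace-separated tokens containing Arabic characters."""
    words = text.split()
    if not words:
        return 0.0
    arabic = sum(1 for w in words if ARABIC_CHARS.search(w))
    return arabic / len(words)

def keep_row(row: dict) -> bool:
    # Keep rows whose text is over 95% Arabic words.
    return arabic_word_ratio(row["text"]) > 0.95
```

With the `datasets` library, such a predicate can be applied with `ds.filter(keep_row)`, and a random subsample can then be drawn with `ds.shuffle(seed=...).select(range(n))`.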
Example: 
```python
from datasets import load_dataset
from pprint import pprint
import random

ds = load_dataset("akhooli/fineweb2_ar_24_sample")
max_n = len(ds["train"])            # number of rows in the train split
index = random.randrange(max_n)     # random valid row index, 0 <= index < max_n
pprint(ds["train"][index]["text"])  # print the article text
```