---
dataset_info:
  config_name: gpt-4
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: token_length
    dtype: int64
  - name: text_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 23230980331
    num_examples: 21462234
  download_size: 12219882718
  dataset_size: 23230980331
configs:
- config_name: gpt-4
  data_files:
  - split: train
    path: gpt-4/train-*
---

This is a Wikipedia passages dataset for ODQA (open-domain question answering) retrievers.
Each passage contains roughly 256 tokens (most fall in the 256~512 range), split with the gpt-4 tokenizer via tiktoken.
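
A minimal sketch of how such chunking can be done with tiktoken (the exact splitting rules used to build this dataset, e.g. boundary handling and overlap, are not documented here; the fixed window below is an assumption):

```python
import tiktoken

# gpt-4 uses the cl100k_base encoding.
enc = tiktoken.encoding_for_model("gpt-4")

def split_into_passages(text: str, target_tokens: int = 256) -> list[str]:
    """Split an article into consecutive windows of `target_tokens` tokens.

    Illustrative only: the dataset's actual splitting logic is not
    documented here and may respect sentence or section boundaries.
    """
    token_ids = enc.encode(text)
    return [
        enc.decode(token_ids[i : i + target_tokens])
        for i in range(0, len(token_ids), target_tokens)
    ]

passages = split_into_passages("Some long Wikipedia article text ...")
print([len(enc.encode(p)) for p in passages])
```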

Token length distribution (passage count per bucket):
```python
{'~128': 1415068, '128~256': 1290011,
'256~512': 18756476, '512~1024': 667,
'1024~2048': 12, '2048~4096': 0, '4096~8192': 0,
'8192~16384': 0, '16384~32768': 0, '32768~65536': 0,
'65536~128000': 0, '128000~': 0}
```
Text length distribution (passage count per bucket):
```python
{'~512': 1556876, '512~1024': 6074975, '1024~2048': 13830329,
'2048~4096': 49, '4096~8192': 2, '8192~16384': 3, '16384~32768': 0,
'32768~65536': 0, '65536~': 0}
```
Token length distribution (percent of passages per bucket):
```python
{'~128': '6.59%', '128~256': '6.01%', '256~512': '87.39%',
'512~1024': '0.00%', '1024~2048': '0.00%', '2048~4096': '0.00%',
'4096~8192': '0.00%', '8192~16384': '0.00%', '16384~32768': '0.00%',
'32768~65536': '0.00%', '65536~128000': '0.00%', '128000~': '0.00%'}
```
Text length distribution (percent of passages per bucket):
```python
{'~512': '7.25%', '512~1024': '28.31%', '1024~2048': '64.44%',
'2048~4096': '0.00%', '4096~8192': '0.00%', '8192~16384': '0.00%',
'16384~32768': '0.00%', '32768~65536': '0.00%', '65536~': '0.00%'}
```
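
The bucket counts and percentages above can be recomputed along these lines (the repo id below is a placeholder for this dataset's actual Hub id; the bucket edges mirror the token tables, and for `text_length` the edges instead start at 512 and end at 65536):

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual Hub id.
ds = load_dataset("user/wikipedia-passages-gpt4", "gpt-4", split="train")

# Power-of-two bucket edges matching the token tables above.
bounds = [128, 256, 512, 1024, 2048, 4096, 8192,
          16384, 32768, 65536, 128000]

def bucket(n: int) -> str:
    """Map a length to its '~128' / '128~256' / ... / '128000~' label."""
    if n < bounds[0]:
        return f"~{bounds[0]}"
    for lo, hi in zip(bounds, bounds[1:]):
        if lo <= n < hi:
            return f"{lo}~{hi}"
    return f"{bounds[-1]}~"

# Note: this materializes the full 21M-row column in memory.
counts = Counter(bucket(n) for n in ds["token_length"])
total = sum(counts.values())
print(counts)
print({k: f"{100 * v / total:.2f}%" for k, v in counts.items()})
```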