---
dataset_info:
  config_name: gpt-4
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: token_length
    dtype: int64
  - name: text_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 23230980331
    num_examples: 21462234
  download_size: 12219882718
  dataset_size: 23230980331
configs:
- config_name: gpt-4
  data_files:
  - split: train
    path: gpt-4/train-*
---

This is a Wikipedia passages dataset for ODQA (open-domain question answering) retrievers.

Each passage contains roughly 256 tokens, split with the gpt-4 tokenizer using tiktoken.
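
The splitting step could be sketched as below. This is an illustration, not the authors' release code: `chunk_tokens` is a hypothetical helper, and the tiktoken calls (which do use the real `tiktoken.encoding_for_model` API) are shown commented out so the sketch stays dependency-free.

```python
# Sketch of splitting an article's token ids into ~256-token passages.
def chunk_tokens(token_ids, max_tokens=256):
    """Split a token-id sequence into consecutive chunks of at most max_tokens."""
    return [token_ids[i:i + max_tokens] for i in range(0, len(token_ids), max_tokens)]

# With tiktoken (pip install tiktoken):
# import tiktoken
# enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base encoding
# ids = enc.encode(article_text)
# passages = [enc.decode(chunk) for chunk in chunk_tokens(ids)]
```

Note that decoding token chunks back to text can split passages mid-word, which may explain why most passages land in the 256~512 token bucket rather than exactly at 256.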

Token count

```python
{'~128': 1415068, '128~256': 1290011,
 '256~512': 18756476, '512~1024': 667,
 '1024~2048': 12, '2048~4096': 0, '4096~8192': 0,
 '8192~16384': 0, '16384~32768': 0, '32768~65536': 0,
 '65536~128000': 0, '128000~': 0}
```

Text count

```python
{'~512': 1556876, '512~1024': 6074975, '1024~2048': 13830329,
 '2048~4096': 49, '4096~8192': 2, '8192~16384': 3, '16384~32768': 0,
 '32768~65536': 0, '65536~': 0}
```

Token percent

```python
{'~128': '6.59%', '128~256': '6.01%', '256~512': '87.39%',
 '512~1024': '0.00%', '1024~2048': '0.00%', '2048~4096': '0.00%',
 '4096~8192': '0.00%', '8192~16384': '0.00%', '16384~32768': '0.00%',
 '32768~65536': '0.00%', '65536~128000': '0.00%', '128000~': '0.00%'}
```

Text percent

```python
{'~512': '7.25%', '512~1024': '28.31%', '1024~2048': '64.44%',
 '2048~4096': '0.00%', '4096~8192': '0.00%', '8192~16384': '0.00%',
 '16384~32768': '0.00%', '32768~65536': '0.00%', '65536~': '0.00%'}
```
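
Bucket labels of this form can be reproduced with a small helper. The sketch below is a hypothetical reconstruction, not the authors' code: the edge list and the boundary convention (lower edge inclusive, upper edge exclusive) are assumptions; pass a shorter edge list (ending at 65536) for the text-length tables.

```python
from bisect import bisect_right
from collections import Counter

# Assumed bucket edges matching the token-count tables above.
EDGES = [128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 128000]

def bucket(n, edges=EDGES):
    """Return a '~128' / '128~256' / '128000~' style label for a length n."""
    i = bisect_right(edges, n)
    if i == 0:
        return f"~{edges[0]}"
    if i == len(edges):
        return f"{edges[-1]}~"
    return f"{edges[i - 1]}~{edges[i]}"

def histogram(lengths, edges=EDGES):
    """Count how many lengths fall into each bucket."""
    return Counter(bucket(n, edges) for n in lengths)
```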